2025-04-10 00:00:09.878816 | Job console starting...
2025-04-10 00:00:09.890251 | Updating repositories
2025-04-10 00:00:10.505557 | Preparing job workspace
2025-04-10 00:00:12.709667 | Running Ansible setup...
2025-04-10 00:00:20.090472 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-04-10 00:00:21.494281 |
2025-04-10 00:00:21.494405 | PLAY [Base pre]
2025-04-10 00:00:21.574106 |
2025-04-10 00:00:21.574221 | TASK [Setup log path fact]
2025-04-10 00:00:21.617330 | orchestrator | ok
2025-04-10 00:00:21.667715 |
2025-04-10 00:00:21.667834 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-04-10 00:00:21.738453 | orchestrator | ok
2025-04-10 00:00:21.772704 |
2025-04-10 00:00:21.772809 | TASK [emit-job-header : Print job information]
2025-04-10 00:00:21.846187 | # Job Information
2025-04-10 00:00:21.846409 | Ansible Version: 2.15.3
2025-04-10 00:00:21.846443 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-04-10 00:00:21.846467 | Pipeline: periodic-midnight
2025-04-10 00:00:21.846484 | Executor: 7d211f194f6a
2025-04-10 00:00:21.846500 | Triggered by: https://github.com/osism/testbed
2025-04-10 00:00:21.846515 | Event ID: cc7bf6afd9b244b8be9d21f19cefa0e1
2025-04-10 00:00:21.854054 |
2025-04-10 00:00:21.854145 | LOOP [emit-job-header : Print node information]
2025-04-10 00:00:22.103426 | orchestrator | ok:
2025-04-10 00:00:22.103561 | orchestrator | # Node Information
2025-04-10 00:00:22.103587 | orchestrator | Inventory Hostname: orchestrator
2025-04-10 00:00:22.103607 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-04-10 00:00:22.103624 | orchestrator | Username: zuul-testbed03
2025-04-10 00:00:22.103640 | orchestrator | Distro: Debian 12.10
2025-04-10 00:00:22.103659 | orchestrator | Provider: static-testbed
2025-04-10 00:00:22.103675 | orchestrator | Label: testbed-orchestrator
2025-04-10 00:00:22.103691 | orchestrator | Product Name: OpenStack Nova
2025-04-10 00:00:22.103707 | orchestrator | Interface IP: 81.163.193.140
2025-04-10 00:00:22.126654 |
2025-04-10 00:00:22.126756 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-04-10 00:00:23.316499 | orchestrator -> localhost | changed
2025-04-10 00:00:23.338366 |
2025-04-10 00:00:23.338466 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-04-10 00:00:25.670077 | orchestrator -> localhost | changed
2025-04-10 00:00:25.683156 |
2025-04-10 00:00:25.683246 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-04-10 00:00:26.216821 | orchestrator -> localhost | ok
2025-04-10 00:00:26.223639 |
2025-04-10 00:00:26.223721 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-04-10 00:00:26.277003 | orchestrator | ok
2025-04-10 00:00:26.314477 | orchestrator | included: /var/lib/zuul/builds/d16e7eccd07141d892bb7c877e6612d7/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-04-10 00:00:26.328955 |
2025-04-10 00:00:26.329055 | TASK [add-build-sshkey : Create Temp SSH key]
2025-04-10 00:00:28.440794 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-04-10 00:00:28.441002 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/d16e7eccd07141d892bb7c877e6612d7/work/d16e7eccd07141d892bb7c877e6612d7_id_rsa
2025-04-10 00:00:28.441039 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/d16e7eccd07141d892bb7c877e6612d7/work/d16e7eccd07141d892bb7c877e6612d7_id_rsa.pub
2025-04-10 00:00:28.441064 | orchestrator -> localhost | The key fingerprint is:
2025-04-10 00:00:28.441089 | orchestrator -> localhost | SHA256:A/1zKUn1ufhWpS4LBMRg9zps7uPu2ArnZxvqKinm/jw zuul-build-sshkey
2025-04-10 00:00:28.441110 | orchestrator -> localhost | The key's randomart image is:
2025-04-10 00:00:28.441130 | orchestrator -> localhost | +---[RSA 3072]----+
2025-04-10 00:00:28.441150 | orchestrator -> localhost | | ooo . |
2025-04-10 00:00:28.441169 | orchestrator -> localhost | | . +.. . . . |
2025-04-10 00:00:28.441197 | orchestrator -> localhost | | . o o o .|
2025-04-10 00:00:28.441217 | orchestrator -> localhost | | o = . o o.|
2025-04-10 00:00:28.441236 | orchestrator -> localhost | | S * + o .|
2025-04-10 00:00:28.441256 | orchestrator -> localhost | | o + + o . |
2025-04-10 00:00:28.441280 | orchestrator -> localhost | | .. . o . . + |
2025-04-10 00:00:28.441300 | orchestrator -> localhost | |..+E + =+. . + |
2025-04-10 00:00:28.441319 | orchestrator -> localhost | |++.+oo*BBo . |
2025-04-10 00:00:28.441338 | orchestrator -> localhost | +----[SHA256]-----+
2025-04-10 00:00:28.441400 | orchestrator -> localhost | ok: Runtime: 0:00:00.868651
2025-04-10 00:00:28.459028 |
2025-04-10 00:00:28.459128 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-04-10 00:00:28.490159 | orchestrator | ok
2025-04-10 00:00:28.502904 | orchestrator | included: /var/lib/zuul/builds/d16e7eccd07141d892bb7c877e6612d7/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-04-10 00:00:28.511704 |
2025-04-10 00:00:28.511789 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-04-10 00:00:28.545837 | orchestrator | skipping: Conditional result was False
2025-04-10 00:00:28.552751 |
2025-04-10 00:00:28.552835 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-04-10 00:00:29.323687 | orchestrator | changed
2025-04-10 00:00:29.332956 |
2025-04-10 00:00:29.333050 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-04-10 00:00:29.617218 | orchestrator | ok
2025-04-10 00:00:29.626210 |
2025-04-10 00:00:29.626303 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-04-10 00:00:30.127311 | orchestrator | ok
2025-04-10 00:00:30.134920 |
2025-04-10 00:00:30.135012 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-04-10 00:00:30.573690 | orchestrator | ok
2025-04-10 00:00:30.583105 |
2025-04-10 00:00:30.583191 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-04-10 00:00:30.636979 | orchestrator | skipping: Conditional result was False
2025-04-10 00:00:30.644485 |
2025-04-10 00:00:30.644568 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-04-10 00:00:31.262039 | orchestrator -> localhost | changed
2025-04-10 00:00:31.274650 |
2025-04-10 00:00:31.274741 | TASK [add-build-sshkey : Add back temp key]
2025-04-10 00:00:31.615790 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/d16e7eccd07141d892bb7c877e6612d7/work/d16e7eccd07141d892bb7c877e6612d7_id_rsa (zuul-build-sshkey)
2025-04-10 00:00:31.615964 | orchestrator -> localhost | ok: Runtime: 0:00:00.024357
2025-04-10 00:00:31.623427 |
2025-04-10 00:00:31.623511 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-04-10 00:00:32.009745 | orchestrator | ok
2025-04-10 00:00:32.016409 |
2025-04-10 00:00:32.016496 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-04-10 00:00:32.042769 | orchestrator | skipping: Conditional result was False
2025-04-10 00:00:32.057644 |
2025-04-10 00:00:32.057747 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-04-10 00:00:32.484144 | orchestrator | ok
2025-04-10 00:00:32.508626 |
2025-04-10 00:00:32.508721 | TASK [validate-host : Define zuul_info_dir fact]
2025-04-10 00:00:32.543329 | orchestrator | ok
2025-04-10 00:00:32.549701 |
2025-04-10 00:00:32.549786 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-04-10 00:00:33.030169 | orchestrator -> localhost | ok
2025-04-10 00:00:33.038569 |
2025-04-10 00:00:33.038656 | TASK [validate-host : Collect information about the host]
2025-04-10 00:00:34.381610 | orchestrator | ok
2025-04-10 00:00:34.419409 |
2025-04-10 00:00:34.419504 | TASK [validate-host : Sanitize hostname]
2025-04-10 00:00:34.580931 | orchestrator | ok
2025-04-10 00:00:34.586836 |
2025-04-10 00:00:34.586918 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-04-10 00:00:35.466267 | orchestrator -> localhost | changed
2025-04-10 00:00:35.480764 |
2025-04-10 00:00:35.480857 | TASK [validate-host : Collect information about zuul worker]
2025-04-10 00:00:36.051739 | orchestrator | ok
2025-04-10 00:00:36.060820 |
2025-04-10 00:00:36.060910 | TASK [validate-host : Write out all zuul information for each host]
2025-04-10 00:00:36.726939 | orchestrator -> localhost | changed
2025-04-10 00:00:36.737494 |
2025-04-10 00:00:36.737578 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-04-10 00:00:37.026941 | orchestrator | ok
2025-04-10 00:00:37.039823 |
2025-04-10 00:00:37.039916 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-04-10 00:01:01.850713 | orchestrator | changed:
2025-04-10 00:01:01.850882 | orchestrator | .d..t...... src/
2025-04-10 00:01:01.850916 | orchestrator | .d..t...... src/github.com/
2025-04-10 00:01:01.850939 | orchestrator | .d..t...... src/github.com/osism/
2025-04-10 00:01:01.850960 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-04-10 00:01:01.850979 | orchestrator | RedHat.yml
2025-04-10 00:01:01.865249 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-04-10 00:01:01.865265 | orchestrator | RedHat.yml
2025-04-10 00:01:01.865316 | orchestrator | = 1.53.0"...
2025-04-10 00:01:14.793666 | orchestrator | 00:01:14.793 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-04-10 00:01:14.873570 | orchestrator | 00:01:14.873 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-04-10 00:01:16.198962 | orchestrator | 00:01:16.198 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-04-10 00:01:17.355411 | orchestrator | 00:01:17.355 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-04-10 00:01:18.613521 | orchestrator | 00:01:18.613 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-04-10 00:01:19.652429 | orchestrator | 00:01:19.652 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-04-10 00:01:20.904831 | orchestrator | 00:01:20.904 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-04-10 00:01:21.995808 | orchestrator | 00:01:21.995 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-04-10 00:01:21.995926 | orchestrator | 00:01:21.995 STDOUT terraform: Providers are signed by their developers.
2025-04-10 00:01:21.995957 | orchestrator | 00:01:21.995 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-04-10 00:01:21.995979 | orchestrator | 00:01:21.995 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-04-10 00:01:21.995997 | orchestrator | 00:01:21.995 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-04-10 00:01:21.996018 | orchestrator | 00:01:21.995 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-04-10 00:01:21.996052 | orchestrator | 00:01:21.995 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-04-10 00:01:21.996067 | orchestrator | 00:01:21.996 STDOUT terraform: you run "tofu init" in the future.
2025-04-10 00:01:21.996641 | orchestrator | 00:01:21.996 STDOUT terraform: OpenTofu has been successfully initialized!
2025-04-10 00:01:21.996684 | orchestrator | 00:01:21.996 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-04-10 00:01:21.996757 | orchestrator | 00:01:21.996 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-04-10 00:01:21.996809 | orchestrator | 00:01:21.996 STDOUT terraform: should now work.
2025-04-10 00:01:21.996840 | orchestrator | 00:01:21.996 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-04-10 00:01:21.996860 | orchestrator | 00:01:21.996 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-04-10 00:01:21.996880 | orchestrator | 00:01:21.996 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-04-10 00:01:22.161202 | orchestrator | 00:01:22.160 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-04-10 00:01:22.356654 | orchestrator | 00:01:22.356 STDOUT terraform: Created and switched to workspace "ci"!
2025-04-10 00:01:22.356781 | orchestrator | 00:01:22.356 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-04-10 00:01:22.356917 | orchestrator | 00:01:22.356 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-04-10 00:01:22.356937 | orchestrator | 00:01:22.356 STDOUT terraform: for this configuration.
2025-04-10 00:01:22.599710 | orchestrator | 00:01:22.599 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-04-10 00:01:22.708256 | orchestrator | 00:01:22.707 STDOUT terraform: ci.auto.tfvars
2025-04-10 00:01:22.948727 | orchestrator | 00:01:22.948 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-04-10 00:01:23.889682 | orchestrator | 00:01:23.889 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-04-10 00:01:24.452636 | orchestrator | 00:01:24.452 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-04-10 00:01:24.679332 | orchestrator | 00:01:24.679 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-04-10 00:01:24.679446 | orchestrator | 00:01:24.679 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-04-10 00:01:24.679465 | orchestrator | 00:01:24.679 STDOUT terraform:   + create
2025-04-10 00:01:24.679497 | orchestrator | 00:01:24.679 STDOUT terraform:  <= read (data resources)
2025-04-10 00:01:24.679718 | orchestrator | 00:01:24.679 STDOUT terraform: OpenTofu will perform the following actions:
2025-04-10 00:01:24.679744 | orchestrator | 00:01:24.679 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-04-10 00:01:24.679775 | orchestrator | 00:01:24.679 STDOUT terraform:   # (config refers to values not yet known)
2025-04-10 00:01:24.679793 | orchestrator | 00:01:24.679 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-04-10 00:01:24.679812 | orchestrator | 00:01:24.679 STDOUT terraform:   + checksum = (known after apply)
2025-04-10 00:01:24.679830 | orchestrator | 00:01:24.679 STDOUT terraform:   + created_at = (known after apply)
2025-04-10 00:01:24.679864 | orchestrator | 00:01:24.679 STDOUT terraform:   + file = (known after apply)
2025-04-10 00:01:24.679894 | orchestrator | 00:01:24.679 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.679926 | orchestrator | 00:01:24.679 STDOUT terraform:   + metadata = (known after apply)
2025-04-10 00:01:24.679944 | orchestrator | 00:01:24.679 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-04-10 00:01:24.679979 | orchestrator | 00:01:24.679 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-04-10 00:01:24.680005 | orchestrator | 00:01:24.679 STDOUT terraform:   + most_recent = true
2025-04-10 00:01:24.680024 | orchestrator | 00:01:24.679 STDOUT terraform:   + name = (known after apply)
2025-04-10 00:01:24.680064 | orchestrator | 00:01:24.680 STDOUT terraform:   + protected = (known after apply)
2025-04-10 00:01:24.680098 | orchestrator | 00:01:24.680 STDOUT terraform:   + region = (known after apply)
2025-04-10 00:01:24.680116 | orchestrator | 00:01:24.680 STDOUT terraform:   + schema = (known after apply)
2025-04-10 00:01:24.680212 | orchestrator | 00:01:24.680 STDOUT terraform:   + size_bytes = (known after apply)
2025-04-10 00:01:24.680237 | orchestrator | 00:01:24.680 STDOUT terraform:   + tags = (known after apply)
2025-04-10 00:01:24.680495 | orchestrator | 00:01:24.680 STDOUT terraform:   + updated_at = (known after apply)
2025-04-10 00:01:24.680514 | orchestrator | 00:01:24.680 STDOUT terraform:   }
2025-04-10 00:01:24.680538 | orchestrator | 00:01:24.680 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-04-10 00:01:24.680573 | orchestrator | 00:01:24.680 STDOUT terraform:   # (config refers to values not yet known)
2025-04-10 00:01:24.680589 | orchestrator | 00:01:24.680 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-04-10 00:01:24.680612 | orchestrator | 00:01:24.680 STDOUT terraform:   + checksum = (known after apply)
2025-04-10 00:01:24.680627 | orchestrator | 00:01:24.680 STDOUT terraform:   + created_at = (known after apply)
2025-04-10 00:01:24.680645 | orchestrator | 00:01:24.680 STDOUT terraform:   + file = (known after apply)
2025-04-10 00:01:24.680682 | orchestrator | 00:01:24.680 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.680700 | orchestrator | 00:01:24.680 STDOUT terraform:   + metadata = (known after apply)
2025-04-10 00:01:24.680735 | orchestrator | 00:01:24.680 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-04-10 00:01:24.680752 | orchestrator | 00:01:24.680 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-04-10 00:01:24.680782 | orchestrator | 00:01:24.680 STDOUT terraform:   + most_recent = true
2025-04-10 00:01:24.680799 | orchestrator | 00:01:24.680 STDOUT terraform:   + name = (known after apply)
2025-04-10 00:01:24.680835 | orchestrator | 00:01:24.680 STDOUT terraform:   + protected = (known after apply)
2025-04-10 00:01:24.680854 | orchestrator | 00:01:24.680 STDOUT terraform:   + region = (known after apply)
2025-04-10 00:01:24.680889 | orchestrator | 00:01:24.680 STDOUT terraform:   + schema = (known after apply)
2025-04-10 00:01:24.680910 | orchestrator | 00:01:24.680 STDOUT terraform:   + size_bytes = (known after apply)
2025-04-10 00:01:24.680944 | orchestrator | 00:01:24.680 STDOUT terraform:   + tags = (known after apply)
2025-04-10 00:01:24.680964 | orchestrator | 00:01:24.680 STDOUT terraform:   + updated_at = (known after apply)
2025-04-10 00:01:24.681170 | orchestrator | 00:01:24.680 STDOUT terraform:   }
2025-04-10 00:01:24.681197 | orchestrator | 00:01:24.681 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-04-10 00:01:24.681213 | orchestrator | 00:01:24.681 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-04-10 00:01:24.681233 | orchestrator | 00:01:24.681 STDOUT terraform:   + content = (known after apply)
2025-04-10 00:01:24.681253 | orchestrator | 00:01:24.681 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-04-10 00:01:24.681288 | orchestrator | 00:01:24.681 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-04-10 00:01:24.681322 | orchestrator | 00:01:24.681 STDOUT terraform:   + content_md5 = (known after apply)
2025-04-10 00:01:24.681363 | orchestrator | 00:01:24.681 STDOUT terraform:   + content_sha1 = (known after apply)
2025-04-10 00:01:24.681393 | orchestrator | 00:01:24.681 STDOUT terraform:   + content_sha256 = (known after apply)
2025-04-10 00:01:24.681429 | orchestrator | 00:01:24.681 STDOUT terraform:   + content_sha512 = (known after apply)
2025-04-10 00:01:24.681448 | orchestrator | 00:01:24.681 STDOUT terraform:   + directory_permission = "0777"
2025-04-10 00:01:24.681466 | orchestrator | 00:01:24.681 STDOUT terraform:   + file_permission = "0644"
2025-04-10 00:01:24.681506 | orchestrator | 00:01:24.681 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-04-10 00:01:24.681543 | orchestrator | 00:01:24.681 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.681562 | orchestrator | 00:01:24.681 STDOUT terraform:   }
2025-04-10 00:01:24.681778 | orchestrator | 00:01:24.681 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-04-10 00:01:24.681798 | orchestrator | 00:01:24.681 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-04-10 00:01:24.681839 | orchestrator | 00:01:24.681 STDOUT terraform:   + content = (known after apply)
2025-04-10 00:01:24.681874 | orchestrator | 00:01:24.681 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-04-10 00:01:24.681909 | orchestrator | 00:01:24.681 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-04-10 00:01:24.681944 | orchestrator | 00:01:24.681 STDOUT terraform:   + content_md5 = (known after apply)
2025-04-10 00:01:24.681979 | orchestrator | 00:01:24.681 STDOUT terraform:   + content_sha1 = (known after apply)
2025-04-10 00:01:24.682014 | orchestrator | 00:01:24.681 STDOUT terraform:   + content_sha256 = (known after apply)
2025-04-10 00:01:24.682073 | orchestrator | 00:01:24.682 STDOUT terraform:   + content_sha512 = (known after apply)
2025-04-10 00:01:24.682103 | orchestrator | 00:01:24.682 STDOUT terraform:   + directory_permission = "0777"
2025-04-10 00:01:24.682123 | orchestrator | 00:01:24.682 STDOUT terraform:   + file_permission = "0644"
2025-04-10 00:01:24.682141 | orchestrator | 00:01:24.682 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-04-10 00:01:24.682179 | orchestrator | 00:01:24.682 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.682198 | orchestrator | 00:01:24.682 STDOUT terraform:   }
2025-04-10 00:01:24.682405 | orchestrator | 00:01:24.682 STDOUT terraform:   # local_file.inventory will be created
2025-04-10 00:01:24.682425 | orchestrator | 00:01:24.682 STDOUT terraform:   + resource "local_file" "inventory" {
2025-04-10 00:01:24.682461 | orchestrator | 00:01:24.682 STDOUT terraform:   + content = (known after apply)
2025-04-10 00:01:24.682495 | orchestrator | 00:01:24.682 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-04-10 00:01:24.682530 | orchestrator | 00:01:24.682 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-04-10 00:01:24.682566 | orchestrator | 00:01:24.682 STDOUT terraform:   + content_md5 = (known after apply)
2025-04-10 00:01:24.682595 | orchestrator | 00:01:24.682 STDOUT terraform:   + content_sha1 = (known after apply)
2025-04-10 00:01:24.682634 | orchestrator | 00:01:24.682 STDOUT terraform:   + content_sha256 = (known after apply)
2025-04-10 00:01:24.682669 | orchestrator | 00:01:24.682 STDOUT terraform:   + content_sha512 = (known after apply)
2025-04-10 00:01:24.682686 | orchestrator | 00:01:24.682 STDOUT terraform:   + directory_permission = "0777"
2025-04-10 00:01:24.682702 | orchestrator | 00:01:24.682 STDOUT terraform:   + file_permission = "0644"
2025-04-10 00:01:24.682744 | orchestrator | 00:01:24.682 STDOUT terraform:   + filename = "inventory.ci"
2025-04-10 00:01:24.682782 | orchestrator | 00:01:24.682 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.682963 | orchestrator | 00:01:24.682 STDOUT terraform:   }
2025-04-10 00:01:24.682985 | orchestrator | 00:01:24.682 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-04-10 00:01:24.683020 | orchestrator | 00:01:24.682 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-04-10 00:01:24.683040 | orchestrator | 00:01:24.682 STDOUT terraform:   + content = (sensitive value)
2025-04-10 00:01:24.683057 | orchestrator | 00:01:24.683 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-04-10 00:01:24.683091 | orchestrator | 00:01:24.683 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-04-10 00:01:24.683124 | orchestrator | 00:01:24.683 STDOUT terraform:   + content_md5 = (known after apply)
2025-04-10 00:01:24.683189 | orchestrator | 00:01:24.683 STDOUT terraform:   + content_sha1 = (known after apply)
2025-04-10 00:01:24.683208 | orchestrator | 00:01:24.683 STDOUT terraform:   + content_sha256 = (known after apply)
2025-04-10 00:01:24.683228 | orchestrator | 00:01:24.683 STDOUT terraform:   + content_sha512 = (known after apply)
2025-04-10 00:01:24.683245 | orchestrator | 00:01:24.683 STDOUT terraform:   + directory_permission = "0700"
2025-04-10 00:01:24.683275 | orchestrator | 00:01:24.683 STDOUT terraform:   + file_permission = "0600"
2025-04-10 00:01:24.683306 | orchestrator | 00:01:24.683 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-04-10 00:01:24.683343 | orchestrator | 00:01:24.683 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.683360 | orchestrator | 00:01:24.683 STDOUT terraform:   }
2025-04-10 00:01:24.683378 | orchestrator | 00:01:24.683 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-04-10 00:01:24.683413 | orchestrator | 00:01:24.683 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-04-10 00:01:24.683431 | orchestrator | 00:01:24.683 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.683642 | orchestrator | 00:01:24.683 STDOUT terraform:   }
2025-04-10 00:01:24.683662 | orchestrator | 00:01:24.683 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-04-10 00:01:24.683693 | orchestrator | 00:01:24.683 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-04-10 00:01:24.683723 | orchestrator | 00:01:24.683 STDOUT terraform:   + attachment = (known after apply)
2025-04-10 00:01:24.683741 | orchestrator | 00:01:24.683 STDOUT terraform:   + availability_zone = "nova"
2025-04-10 00:01:24.683771 | orchestrator | 00:01:24.683 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.683801 | orchestrator | 00:01:24.683 STDOUT terraform:   + image_id = (known after apply)
2025-04-10 00:01:24.683832 | orchestrator | 00:01:24.683 STDOUT terraform:   + metadata = (known after apply)
2025-04-10 00:01:24.683872 | orchestrator | 00:01:24.683 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-04-10 00:01:24.683903 | orchestrator | 00:01:24.683 STDOUT terraform:   + region = (known after apply)
2025-04-10 00:01:24.683920 | orchestrator | 00:01:24.683 STDOUT terraform:   + size = 80
2025-04-10 00:01:24.683938 | orchestrator | 00:01:24.683 STDOUT terraform:   + volume_type = "ssd"
2025-04-10 00:01:24.683955 | orchestrator | 00:01:24.683 STDOUT terraform:   }
2025-04-10 00:01:24.684122 | orchestrator | 00:01:24.684 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-04-10 00:01:24.684192 | orchestrator | 00:01:24.684 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-10 00:01:24.684212 | orchestrator | 00:01:24.684 STDOUT terraform:   + attachment = (known after apply)
2025-04-10 00:01:24.684229 | orchestrator | 00:01:24.684 STDOUT terraform:   + availability_zone = "nova"
2025-04-10 00:01:24.684267 | orchestrator | 00:01:24.684 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.684297 | orchestrator | 00:01:24.684 STDOUT terraform:   + image_id = (known after apply)
2025-04-10 00:01:24.684328 | orchestrator | 00:01:24.684 STDOUT terraform:   + metadata = (known after apply)
2025-04-10 00:01:24.684367 | orchestrator | 00:01:24.684 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-04-10 00:01:24.684399 | orchestrator | 00:01:24.684 STDOUT terraform:   + region = (known after apply)
2025-04-10 00:01:24.684414 | orchestrator | 00:01:24.684 STDOUT terraform:   + size = 80
2025-04-10 00:01:24.684429 | orchestrator | 00:01:24.684 STDOUT terraform:   + volume_type = "ssd"
2025-04-10 00:01:24.684443 | orchestrator | 00:01:24.684 STDOUT terraform:   }
2025-04-10 00:01:24.684576 | orchestrator | 00:01:24.684 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-04-10 00:01:24.684619 | orchestrator | 00:01:24.684 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-10 00:01:24.684650 | orchestrator | 00:01:24.684 STDOUT terraform:   + attachment = (known after apply)
2025-04-10 00:01:24.684665 | orchestrator | 00:01:24.684 STDOUT terraform:   + availability_zone = "nova"
2025-04-10 00:01:24.684699 | orchestrator | 00:01:24.684 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.684730 | orchestrator | 00:01:24.684 STDOUT terraform:   + image_id = (known after apply)
2025-04-10 00:01:24.684762 | orchestrator | 00:01:24.684 STDOUT terraform:   + metadata = (known after apply)
2025-04-10 00:01:24.684801 | orchestrator | 00:01:24.684 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-04-10 00:01:24.684841 | orchestrator | 00:01:24.684 STDOUT terraform:   + region = (known after apply)
2025-04-10 00:01:24.684866 | orchestrator | 00:01:24.684 STDOUT terraform:   + size = 80
2025-04-10 00:01:24.684878 | orchestrator | 00:01:24.684 STDOUT terraform:   + volume_type = "ssd"
2025-04-10 00:01:24.684893 | orchestrator | 00:01:24.684 STDOUT terraform:   }
2025-04-10 00:01:24.684924 | orchestrator | 00:01:24.684 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-04-10 00:01:24.684971 | orchestrator | 00:01:24.684 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-10 00:01:24.685002 | orchestrator | 00:01:24.684 STDOUT terraform:   + attachment = (known after apply)
2025-04-10 00:01:24.685016 | orchestrator | 00:01:24.684 STDOUT terraform:   + availability_zone = "nova"
2025-04-10 00:01:24.685049 | orchestrator | 00:01:24.685 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.685082 | orchestrator | 00:01:24.685 STDOUT terraform:   + image_id = (known after apply)
2025-04-10 00:01:24.685113 | orchestrator | 00:01:24.685 STDOUT terraform:   + metadata = (known after apply)
2025-04-10 00:01:24.685203 | orchestrator | 00:01:24.685 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-04-10 00:01:24.685218 | orchestrator | 00:01:24.685 STDOUT terraform:   + region = (known after apply)
2025-04-10 00:01:24.685229 | orchestrator | 00:01:24.685 STDOUT terraform:   + size = 80
2025-04-10 00:01:24.685244 | orchestrator | 00:01:24.685 STDOUT terraform:   + volume_type = "ssd"
2025-04-10 00:01:24.685394 | orchestrator | 00:01:24.685 STDOUT terraform:   }
2025-04-10 00:01:24.685414 | orchestrator | 00:01:24.685 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-04-10 00:01:24.685434 | orchestrator | 00:01:24.685 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-10 00:01:24.685467 | orchestrator | 00:01:24.685 STDOUT terraform:   + attachment = (known after apply)
2025-04-10 00:01:24.685483 | orchestrator | 00:01:24.685 STDOUT terraform:   + availability_zone = "nova"
2025-04-10 00:01:24.685517 | orchestrator | 00:01:24.685 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.685548 | orchestrator | 00:01:24.685 STDOUT terraform:   + image_id = (known after apply)
2025-04-10 00:01:24.685578 | orchestrator | 00:01:24.685 STDOUT terraform:   + metadata = (known after apply)
2025-04-10 00:01:24.685616 | orchestrator | 00:01:24.685 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-04-10 00:01:24.685648 | orchestrator | 00:01:24.685 STDOUT terraform:   + region = (known after apply)
2025-04-10 00:01:24.685663 | orchestrator | 00:01:24.685 STDOUT terraform:   + size = 80
2025-04-10 00:01:24.685679 | orchestrator | 00:01:24.685 STDOUT terraform:   + volume_type = "ssd"
2025-04-10 00:01:24.685694 | orchestrator | 00:01:24.685 STDOUT terraform:   }
2025-04-10 00:01:24.685814 | orchestrator | 00:01:24.685 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-04-10 00:01:24.685859 | orchestrator | 00:01:24.685 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-10 00:01:24.685889 | orchestrator | 00:01:24.685 STDOUT terraform:   + attachment = (known after apply)
2025-04-10 00:01:24.685917 | orchestrator | 00:01:24.685 STDOUT terraform:   + availability_zone = "nova"
2025-04-10 00:01:24.685932 | orchestrator | 00:01:24.685 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.685966 | orchestrator | 00:01:24.685 STDOUT terraform:   + image_id = (known after apply)
2025-04-10 00:01:24.685998 | orchestrator | 00:01:24.685 STDOUT terraform:   + metadata = (known after apply)
2025-04-10 00:01:24.686057 | orchestrator | 00:01:24.685 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-04-10 00:01:24.686089 | orchestrator | 00:01:24.686 STDOUT terraform:   + region = (known after apply)
2025-04-10 00:01:24.686104 | orchestrator | 00:01:24.686 STDOUT terraform:   + size = 80
2025-04-10 00:01:24.686120 | orchestrator | 00:01:24.686 STDOUT terraform:   + volume_type = "ssd"
2025-04-10 00:01:24.686135 | orchestrator | 00:01:24.686 STDOUT terraform:   }
2025-04-10 00:01:24.686272 | orchestrator | 00:01:24.686 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-04-10 00:01:24.686317 | orchestrator | 00:01:24.686 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-10 00:01:24.686349 | orchestrator | 00:01:24.686 STDOUT terraform:   + attachment = (known after apply)
2025-04-10 00:01:24.686365 | orchestrator | 00:01:24.686 STDOUT terraform:   + availability_zone = "nova"
2025-04-10 00:01:24.686399 | orchestrator | 00:01:24.686 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.686428 | orchestrator | 00:01:24.686 STDOUT terraform:   + image_id = (known after apply)
2025-04-10 00:01:24.686460 | orchestrator | 00:01:24.686 STDOUT terraform:   + metadata = (known after apply)
2025-04-10 00:01:24.686499 | orchestrator | 00:01:24.686 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-04-10 00:01:24.686530 | orchestrator | 00:01:24.686 STDOUT terraform:   + region = (known after apply)
2025-04-10 00:01:24.686545 | orchestrator | 00:01:24.686 STDOUT terraform:   + size = 80
2025-04-10 00:01:24.686560 | orchestrator | 00:01:24.686 STDOUT terraform:   + volume_type = "ssd"
2025-04-10 00:01:24.686576 | orchestrator | 00:01:24.686 STDOUT terraform:   }
2025-04-10 00:01:24.686696 | orchestrator | 00:01:24.686 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-04-10 00:01:24.686732 | orchestrator | 00:01:24.686 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-10 00:01:24.686763 | orchestrator | 00:01:24.686 STDOUT terraform:   + attachment = (known after apply)
2025-04-10 00:01:24.686779 | orchestrator | 00:01:24.686 STDOUT terraform:   + availability_zone = "nova"
2025-04-10 00:01:24.686816 | orchestrator | 00:01:24.686 STDOUT terraform:   + id = (known after apply)
2025-04-10 00:01:24.686832 | orchestrator | 00:01:24.686 STDOUT terraform:   + metadata = (known after apply)
2025-04-10 00:01:24.686875 | orchestrator | 00:01:24.686 STDOUT terraform:   + name = "testbed-volume-0-node-0"
2025-04-10 00:01:24.686907 | orchestrator | 00:01:24.686 STDOUT terraform:   + region = (known after apply)
2025-04-10 00:01:24.686923 | orchestrator | 00:01:24.686 STDOUT terraform:   + size = 20
2025-04-10 00:01:24.686950 | orchestrator | 00:01:24.686 STDOUT terraform:   + volume_type = "ssd"
2025-04-10 00:01:24.687057 | orchestrator | 00:01:24.686 STDOUT terraform:   }
2025-04-10 00:01:24.687073 | orchestrator | 00:01:24.687 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-04-10 00:01:24.687105 | orchestrator | 00:01:24.687 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-10 00:01:24.687134 | orchestrator | 00:01:24.687 STDOUT terraform:   + attachment = (known after apply)
2025-04-10 00:01:24.687203 | orchestrator | 00:01:24.687 STDOUT terraform:
+ availability_zone = "nova" 2025-04-10 00:01:24.687218 | orchestrator | 00:01:24.687 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.687234 | orchestrator | 00:01:24.687 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.687250 | orchestrator | 00:01:24.687 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-04-10 00:01:24.687283 | orchestrator | 00:01:24.687 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.687299 | orchestrator | 00:01:24.687 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.687314 | orchestrator | 00:01:24.687 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.687329 | orchestrator | 00:01:24.687 STDOUT terraform:  } 2025-04-10 00:01:24.687496 | orchestrator | 00:01:24.687 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-04-10 00:01:24.687535 | orchestrator | 00:01:24.687 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.687566 | orchestrator | 00:01:24.687 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.687583 | orchestrator | 00:01:24.687 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.687618 | orchestrator | 00:01:24.687 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.687651 | orchestrator | 00:01:24.687 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.687688 | orchestrator | 00:01:24.687 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-04-10 00:01:24.687719 | orchestrator | 00:01:24.687 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.687734 | orchestrator | 00:01:24.687 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.687751 | orchestrator | 00:01:24.687 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.687766 | orchestrator | 00:01:24.687 STDOUT terraform:  } 2025-04-10 00:01:24.687863 | orchestrator | 00:01:24.687 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-04-10 00:01:24.687905 | orchestrator | 00:01:24.687 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.687937 | orchestrator | 00:01:24.687 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.687952 | orchestrator | 00:01:24.687 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.687986 | orchestrator | 00:01:24.687 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.688017 | orchestrator | 00:01:24.687 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.688056 | orchestrator | 00:01:24.688 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-04-10 00:01:24.688087 | orchestrator | 00:01:24.688 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.688103 | orchestrator | 00:01:24.688 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.688118 | orchestrator | 00:01:24.688 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.688133 | orchestrator | 00:01:24.688 STDOUT terraform:  } 2025-04-10 00:01:24.688312 | orchestrator | 00:01:24.688 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-04-10 00:01:24.688349 | orchestrator | 00:01:24.688 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.688382 | orchestrator | 00:01:24.688 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.688398 | orchestrator | 00:01:24.688 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.688432 | orchestrator | 00:01:24.688 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.688461 | orchestrator | 00:01:24.688 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.688499 | orchestrator | 00:01:24.688 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-04-10 00:01:24.688529 | orchestrator | 00:01:24.688 STDOUT 
terraform:  + region = (known after apply) 2025-04-10 00:01:24.688544 | orchestrator | 00:01:24.688 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.688560 | orchestrator | 00:01:24.688 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.688575 | orchestrator | 00:01:24.688 STDOUT terraform:  } 2025-04-10 00:01:24.688620 | orchestrator | 00:01:24.688 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-04-10 00:01:24.688663 | orchestrator | 00:01:24.688 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.688693 | orchestrator | 00:01:24.688 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.688707 | orchestrator | 00:01:24.688 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.688743 | orchestrator | 00:01:24.688 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.688779 | orchestrator | 00:01:24.688 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.688811 | orchestrator | 00:01:24.688 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-04-10 00:01:24.688841 | orchestrator | 00:01:24.688 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.688855 | orchestrator | 00:01:24.688 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.688881 | orchestrator | 00:01:24.688 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.688896 | orchestrator | 00:01:24.688 STDOUT terraform:  } 2025-04-10 00:01:24.689013 | orchestrator | 00:01:24.688 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-04-10 00:01:24.689058 | orchestrator | 00:01:24.689 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.689090 | orchestrator | 00:01:24.689 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.689115 | orchestrator | 00:01:24.689 STDOUT terraform:  + availability_zone = "nova" 
2025-04-10 00:01:24.689128 | orchestrator | 00:01:24.689 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.689192 | orchestrator | 00:01:24.689 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.689228 | orchestrator | 00:01:24.689 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-04-10 00:01:24.689259 | orchestrator | 00:01:24.689 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.689273 | orchestrator | 00:01:24.689 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.689304 | orchestrator | 00:01:24.689 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.689354 | orchestrator | 00:01:24.689 STDOUT terraform:  } 2025-04-10 00:01:24.689368 | orchestrator | 00:01:24.689 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-04-10 00:01:24.689399 | orchestrator | 00:01:24.689 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.689429 | orchestrator | 00:01:24.689 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.689443 | orchestrator | 00:01:24.689 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.689480 | orchestrator | 00:01:24.689 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.689511 | orchestrator | 00:01:24.689 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.689550 | orchestrator | 00:01:24.689 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-04-10 00:01:24.689582 | orchestrator | 00:01:24.689 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.689597 | orchestrator | 00:01:24.689 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.689611 | orchestrator | 00:01:24.689 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.689624 | orchestrator | 00:01:24.689 STDOUT terraform:  } 2025-04-10 00:01:24.689716 | orchestrator | 00:01:24.689 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-04-10 00:01:24.689758 | orchestrator | 00:01:24.689 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.689789 | orchestrator | 00:01:24.689 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.689812 | orchestrator | 00:01:24.689 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.689837 | orchestrator | 00:01:24.689 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.689869 | orchestrator | 00:01:24.689 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.689907 | orchestrator | 00:01:24.689 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-04-10 00:01:24.689941 | orchestrator | 00:01:24.689 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.689955 | orchestrator | 00:01:24.689 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.689969 | orchestrator | 00:01:24.689 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.689983 | orchestrator | 00:01:24.689 STDOUT terraform:  } 2025-04-10 00:01:24.690128 | orchestrator | 00:01:24.690 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-04-10 00:01:24.690208 | orchestrator | 00:01:24.690 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.690224 | orchestrator | 00:01:24.690 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.690237 | orchestrator | 00:01:24.690 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.690250 | orchestrator | 00:01:24.690 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.690281 | orchestrator | 00:01:24.690 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.690319 | orchestrator | 00:01:24.690 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-04-10 00:01:24.690352 | orchestrator | 00:01:24.690 STDOUT 
terraform:  + region = (known after apply) 2025-04-10 00:01:24.690365 | orchestrator | 00:01:24.690 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.690390 | orchestrator | 00:01:24.690 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.690405 | orchestrator | 00:01:24.690 STDOUT terraform:  } 2025-04-10 00:01:24.690515 | orchestrator | 00:01:24.690 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-04-10 00:01:24.690558 | orchestrator | 00:01:24.690 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.690587 | orchestrator | 00:01:24.690 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.690602 | orchestrator | 00:01:24.690 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.690635 | orchestrator | 00:01:24.690 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.690671 | orchestrator | 00:01:24.690 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.690707 | orchestrator | 00:01:24.690 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-04-10 00:01:24.690739 | orchestrator | 00:01:24.690 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.690753 | orchestrator | 00:01:24.690 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.690780 | orchestrator | 00:01:24.690 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.690794 | orchestrator | 00:01:24.690 STDOUT terraform:  } 2025-04-10 00:01:24.690836 | orchestrator | 00:01:24.690 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-04-10 00:01:24.690882 | orchestrator | 00:01:24.690 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.690912 | orchestrator | 00:01:24.690 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.690925 | orchestrator | 00:01:24.690 STDOUT terraform:  + availability_zone = "nova" 
2025-04-10 00:01:24.690960 | orchestrator | 00:01:24.690 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.690991 | orchestrator | 00:01:24.690 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.691028 | orchestrator | 00:01:24.690 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-04-10 00:01:24.691054 | orchestrator | 00:01:24.691 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.691068 | orchestrator | 00:01:24.691 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.691092 | orchestrator | 00:01:24.691 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.691106 | orchestrator | 00:01:24.691 STDOUT terraform:  } 2025-04-10 00:01:24.691272 | orchestrator | 00:01:24.691 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-04-10 00:01:24.691313 | orchestrator | 00:01:24.691 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.691346 | orchestrator | 00:01:24.691 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.691360 | orchestrator | 00:01:24.691 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.691396 | orchestrator | 00:01:24.691 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.691427 | orchestrator | 00:01:24.691 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.691465 | orchestrator | 00:01:24.691 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-04-10 00:01:24.691496 | orchestrator | 00:01:24.691 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.691510 | orchestrator | 00:01:24.691 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.691536 | orchestrator | 00:01:24.691 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.691550 | orchestrator | 00:01:24.691 STDOUT terraform:  } 2025-04-10 00:01:24.691593 | orchestrator | 00:01:24.691 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-04-10 00:01:24.691637 | orchestrator | 00:01:24.691 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.691666 | orchestrator | 00:01:24.691 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.691681 | orchestrator | 00:01:24.691 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.691714 | orchestrator | 00:01:24.691 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.691746 | orchestrator | 00:01:24.691 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.691785 | orchestrator | 00:01:24.691 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-04-10 00:01:24.691816 | orchestrator | 00:01:24.691 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.691830 | orchestrator | 00:01:24.691 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.691854 | orchestrator | 00:01:24.691 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.691868 | orchestrator | 00:01:24.691 STDOUT terraform:  } 2025-04-10 00:01:24.691961 | orchestrator | 00:01:24.691 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-04-10 00:01:24.692002 | orchestrator | 00:01:24.691 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.692033 | orchestrator | 00:01:24.691 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.692048 | orchestrator | 00:01:24.692 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.692071 | orchestrator | 00:01:24.692 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.692108 | orchestrator | 00:01:24.692 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.692145 | orchestrator | 00:01:24.692 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-04-10 00:01:24.692193 | orchestrator | 00:01:24.692 STDOUT 
terraform:  + region = (known after apply) 2025-04-10 00:01:24.692207 | orchestrator | 00:01:24.692 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.692231 | orchestrator | 00:01:24.692 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.692245 | orchestrator | 00:01:24.692 STDOUT terraform:  } 2025-04-10 00:01:24.692287 | orchestrator | 00:01:24.692 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-04-10 00:01:24.692329 | orchestrator | 00:01:24.692 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.692360 | orchestrator | 00:01:24.692 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.692374 | orchestrator | 00:01:24.692 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.692407 | orchestrator | 00:01:24.692 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.692442 | orchestrator | 00:01:24.692 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.692476 | orchestrator | 00:01:24.692 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-04-10 00:01:24.692506 | orchestrator | 00:01:24.692 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.692519 | orchestrator | 00:01:24.692 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.692543 | orchestrator | 00:01:24.692 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.692557 | orchestrator | 00:01:24.692 STDOUT terraform:  } 2025-04-10 00:01:24.692683 | orchestrator | 00:01:24.692 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-04-10 00:01:24.692745 | orchestrator | 00:01:24.692 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.692777 | orchestrator | 00:01:24.692 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.692789 | orchestrator | 00:01:24.692 STDOUT terraform:  + availability_zone = "nova" 
2025-04-10 00:01:24.692827 | orchestrator | 00:01:24.692 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.692857 | orchestrator | 00:01:24.692 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.692897 | orchestrator | 00:01:24.692 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-04-10 00:01:24.692927 | orchestrator | 00:01:24.692 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.692940 | orchestrator | 00:01:24.692 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.692965 | orchestrator | 00:01:24.692 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.692979 | orchestrator | 00:01:24.692 STDOUT terraform:  } 2025-04-10 00:01:24.693021 | orchestrator | 00:01:24.692 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-04-10 00:01:24.693064 | orchestrator | 00:01:24.693 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-10 00:01:24.693094 | orchestrator | 00:01:24.693 STDOUT terraform:  + attachment = (known after apply) 2025-04-10 00:01:24.693107 | orchestrator | 00:01:24.693 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.693143 | orchestrator | 00:01:24.693 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.693180 | orchestrator | 00:01:24.693 STDOUT terraform:  + metadata = (known after apply) 2025-04-10 00:01:24.693221 | orchestrator | 00:01:24.693 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-04-10 00:01:24.693254 | orchestrator | 00:01:24.693 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.693267 | orchestrator | 00:01:24.693 STDOUT terraform:  + size = 20 2025-04-10 00:01:24.693291 | orchestrator | 00:01:24.693 STDOUT terraform:  + volume_type = "ssd" 2025-04-10 00:01:24.693304 | orchestrator | 00:01:24.693 STDOUT terraform:  } 2025-04-10 00:01:24.693671 | orchestrator | 00:01:24.693 STDOUT terraform:  # 
openstack_compute_instance_v2.manager_server will be created 2025-04-10 00:01:24.693712 | orchestrator | 00:01:24.693 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-04-10 00:01:24.693749 | orchestrator | 00:01:24.693 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-10 00:01:24.693783 | orchestrator | 00:01:24.693 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-10 00:01:24.693819 | orchestrator | 00:01:24.693 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-10 00:01:24.693854 | orchestrator | 00:01:24.693 STDOUT terraform:  + all_tags = (known after apply) 2025-04-10 00:01:24.693878 | orchestrator | 00:01:24.693 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.693891 | orchestrator | 00:01:24.693 STDOUT terraform:  + config_drive = true 2025-04-10 00:01:24.693931 | orchestrator | 00:01:24.693 STDOUT terraform:  + created = (known after apply) 2025-04-10 00:01:24.693966 | orchestrator | 00:01:24.693 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-10 00:01:24.693996 | orchestrator | 00:01:24.693 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-04-10 00:01:24.694049 | orchestrator | 00:01:24.693 STDOUT terraform:  + force_delete = false 2025-04-10 00:01:24.694064 | orchestrator | 00:01:24.694 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.694101 | orchestrator | 00:01:24.694 STDOUT terraform:  + image_id = (known after apply) 2025-04-10 00:01:24.694135 | orchestrator | 00:01:24.694 STDOUT terraform:  + image_name = (known after apply) 2025-04-10 00:01:24.694184 | orchestrator | 00:01:24.694 STDOUT terraform:  + key_pair = "testbed" 2025-04-10 00:01:24.694215 | orchestrator | 00:01:24.694 STDOUT terraform:  + name = "testbed-manager" 2025-04-10 00:01:24.694240 | orchestrator | 00:01:24.694 STDOUT terraform:  + power_state = "active" 2025-04-10 00:01:24.694275 | orchestrator | 00:01:24.694 STDOUT terraform:  + region = (known after 
apply) 2025-04-10 00:01:24.694310 | orchestrator | 00:01:24.694 STDOUT terraform:  + security_groups = (known after apply) 2025-04-10 00:01:24.694334 | orchestrator | 00:01:24.694 STDOUT terraform:  + stop_before_destroy = false 2025-04-10 00:01:24.694365 | orchestrator | 00:01:24.694 STDOUT terraform:  + updated = (known after apply) 2025-04-10 00:01:24.694404 | orchestrator | 00:01:24.694 STDOUT terraform:  + user_data = (known after apply) 2025-04-10 00:01:24.694418 | orchestrator | 00:01:24.694 STDOUT terraform:  + block_device { 2025-04-10 00:01:24.694432 | orchestrator | 00:01:24.694 STDOUT terraform:  + boot_index = 0 2025-04-10 00:01:24.694463 | orchestrator | 00:01:24.694 STDOUT terraform:  + delete_on_termination = false 2025-04-10 00:01:24.694492 | orchestrator | 00:01:24.694 STDOUT terraform:  + destination_type = "volume" 2025-04-10 00:01:24.694592 | orchestrator | 00:01:24.694 STDOUT terraform:  + multiattach = false 2025-04-10 00:01:24.694605 | orchestrator | 00:01:24.694 STDOUT terraform:  + source_type = "volume" 2025-04-10 00:01:24.694615 | orchestrator | 00:01:24.694 STDOUT terraform:  + uuid = (known after apply) 2025-04-10 00:01:24.694632 | orchestrator | 00:01:24.694 STDOUT terraform:  } 2025-04-10 00:01:24.694645 | orchestrator | 00:01:24.694 STDOUT terraform:  + network { 2025-04-10 00:01:24.694677 | orchestrator | 00:01:24.694 STDOUT terraform:  + access_network = false 2025-04-10 00:01:24.694688 | orchestrator | 00:01:24.694 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-10 00:01:24.694703 | orchestrator | 00:01:24.694 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-10 00:01:24.694737 | orchestrator | 00:01:24.694 STDOUT terraform:  + mac = (known after apply) 2025-04-10 00:01:24.694750 | orchestrator | 00:01:24.694 STDOUT terraform:  + name = (known after apply) 2025-04-10 00:01:24.694763 | orchestrator | 00:01:24.694 STDOUT terraform:  + port = (known after apply) 2025-04-10 00:01:24.694800 | orchestrator | 
00:01:24.694 STDOUT terraform:  + uuid = (known after apply) 2025-04-10 00:01:24.694813 | orchestrator | 00:01:24.694 STDOUT terraform:  } 2025-04-10 00:01:24.695043 | orchestrator | 00:01:24.694 STDOUT terraform:  } 2025-04-10 00:01:24.695058 | orchestrator | 00:01:24.694 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-04-10 00:01:24.695086 | orchestrator | 00:01:24.695 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-10 00:01:24.695122 | orchestrator | 00:01:24.695 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-10 00:01:24.695173 | orchestrator | 00:01:24.695 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-10 00:01:24.695204 | orchestrator | 00:01:24.695 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-10 00:01:24.695240 | orchestrator | 00:01:24.695 STDOUT terraform:  + all_tags = (known after apply) 2025-04-10 00:01:24.695263 | orchestrator | 00:01:24.695 STDOUT terraform:  + availability_zone = "nova" 2025-04-10 00:01:24.695277 | orchestrator | 00:01:24.695 STDOUT terraform:  + config_drive = true 2025-04-10 00:01:24.695317 | orchestrator | 00:01:24.695 STDOUT terraform:  + created = (known after apply) 2025-04-10 00:01:24.695353 | orchestrator | 00:01:24.695 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-10 00:01:24.695383 | orchestrator | 00:01:24.695 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-10 00:01:24.695406 | orchestrator | 00:01:24.695 STDOUT terraform:  + force_delete = false 2025-04-10 00:01:24.695442 | orchestrator | 00:01:24.695 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.695478 | orchestrator | 00:01:24.695 STDOUT terraform:  + image_id = (known after apply) 2025-04-10 00:01:24.695512 | orchestrator | 00:01:24.695 STDOUT terraform:  + image_name = (known after apply) 2025-04-10 00:01:24.695536 | orchestrator | 00:01:24.695 STDOUT terraform:  + key_pair = "testbed" 2025-04-10 
00:01:24.695566 | orchestrator | 00:01:24.695 STDOUT terraform:
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
orchestrator | 00:01:24.706 STDOUT terraform:  } 2025-04-10 00:01:24.706885 | orchestrator | 00:01:24.706 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created 2025-04-10 00:01:24.706939 | orchestrator | 00:01:24.706 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-10 00:01:24.706970 | orchestrator | 00:01:24.706 STDOUT terraform:  + device = (known after apply) 2025-04-10 00:01:24.707000 | orchestrator | 00:01:24.706 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.707030 | orchestrator | 00:01:24.706 STDOUT terraform:  + instance_id = (known after apply) 2025-04-10 00:01:24.707058 | orchestrator | 00:01:24.707 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.707086 | orchestrator | 00:01:24.707 STDOUT terraform:  + volume_id = (known after apply) 2025-04-10 00:01:24.707093 | orchestrator | 00:01:24.707 STDOUT terraform:  } 2025-04-10 00:01:24.707178 | orchestrator | 00:01:24.707 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created 2025-04-10 00:01:24.707209 | orchestrator | 00:01:24.707 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-10 00:01:24.707238 | orchestrator | 00:01:24.707 STDOUT terraform:  + device = (known after apply) 2025-04-10 00:01:24.707266 | orchestrator | 00:01:24.707 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.707294 | orchestrator | 00:01:24.707 STDOUT terraform:  + instance_id = (known after apply) 2025-04-10 00:01:24.707324 | orchestrator | 00:01:24.707 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.707351 | orchestrator | 00:01:24.707 STDOUT terraform:  + volume_id = (known after apply) 2025-04-10 00:01:24.707358 | orchestrator | 00:01:24.707 STDOUT terraform:  } 2025-04-10 00:01:24.707415 | orchestrator | 00:01:24.707 STDOUT terraform:  # 
openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-04-10 00:01:24.707468 | orchestrator | 00:01:24.707 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-04-10 00:01:24.707496 | orchestrator | 00:01:24.707 STDOUT terraform:  + fixed_ip = (known after apply) 2025-04-10 00:01:24.707524 | orchestrator | 00:01:24.707 STDOUT terraform:  + floating_ip = (known after apply) 2025-04-10 00:01:24.707554 | orchestrator | 00:01:24.707 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.707581 | orchestrator | 00:01:24.707 STDOUT terraform:  + port_id = (known after apply) 2025-04-10 00:01:24.707613 | orchestrator | 00:01:24.707 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.707620 | orchestrator | 00:01:24.707 STDOUT terraform:  } 2025-04-10 00:01:24.707666 | orchestrator | 00:01:24.707 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-04-10 00:01:24.707713 | orchestrator | 00:01:24.707 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-04-10 00:01:24.707738 | orchestrator | 00:01:24.707 STDOUT terraform:  + address = (known after apply) 2025-04-10 00:01:24.707763 | orchestrator | 00:01:24.707 STDOUT terraform:  + all_tags = (known after apply) 2025-04-10 00:01:24.707787 | orchestrator | 00:01:24.707 STDOUT terraform:  + dns_domain = (known after apply) 2025-04-10 00:01:24.707812 | orchestrator | 00:01:24.707 STDOUT terraform:  + dns_name = (known after apply) 2025-04-10 00:01:24.707839 | orchestrator | 00:01:24.707 STDOUT terraform:  + fixed_ip = (known after apply) 2025-04-10 00:01:24.707865 | orchestrator | 00:01:24.707 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.707886 | orchestrator | 00:01:24.707 STDOUT terraform:  + pool = "public" 2025-04-10 00:01:24.707912 | orchestrator | 00:01:24.707 STDOUT terraform:  + 
port_id = (known after apply) 2025-04-10 00:01:24.707937 | orchestrator | 00:01:24.707 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.707961 | orchestrator | 00:01:24.707 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-10 00:01:24.707985 | orchestrator | 00:01:24.707 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-10 00:01:24.707993 | orchestrator | 00:01:24.707 STDOUT terraform:  } 2025-04-10 00:01:24.708037 | orchestrator | 00:01:24.707 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-04-10 00:01:24.708082 | orchestrator | 00:01:24.708 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-04-10 00:01:24.708118 | orchestrator | 00:01:24.708 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-10 00:01:24.708167 | orchestrator | 00:01:24.708 STDOUT terraform:  + all_tags = (known after apply) 2025-04-10 00:01:24.708180 | orchestrator | 00:01:24.708 STDOUT terraform:  + availability_zone_hints = [ 2025-04-10 00:01:24.708199 | orchestrator | 00:01:24.708 STDOUT terraform:  + "nova", 2025-04-10 00:01:24.708206 | orchestrator | 00:01:24.708 STDOUT terraform:  ] 2025-04-10 00:01:24.708246 | orchestrator | 00:01:24.708 STDOUT terraform:  + dns_domain = (known after apply) 2025-04-10 00:01:24.708281 | orchestrator | 00:01:24.708 STDOUT terraform:  + external = (known after apply) 2025-04-10 00:01:24.708318 | orchestrator | 00:01:24.708 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.708356 | orchestrator | 00:01:24.708 STDOUT terraform:  + mtu = (known after apply) 2025-04-10 00:01:24.708394 | orchestrator | 00:01:24.708 STDOUT terraform:  + name = "net-testbed-management" 2025-04-10 00:01:24.708429 | orchestrator | 00:01:24.708 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-10 00:01:24.708467 | orchestrator | 00:01:24.708 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-10 
00:01:24.708504 | orchestrator | 00:01:24.708 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.708540 | orchestrator | 00:01:24.708 STDOUT terraform:  + shared = (known after apply) 2025-04-10 00:01:24.708577 | orchestrator | 00:01:24.708 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-10 00:01:24.708612 | orchestrator | 00:01:24.708 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-04-10 00:01:24.708636 | orchestrator | 00:01:24.708 STDOUT terraform:  + segments (known after apply) 2025-04-10 00:01:24.708644 | orchestrator | 00:01:24.708 STDOUT terraform:  } 2025-04-10 00:01:24.708693 | orchestrator | 00:01:24.708 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-04-10 00:01:24.708739 | orchestrator | 00:01:24.708 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-04-10 00:01:24.708775 | orchestrator | 00:01:24.708 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-10 00:01:24.708810 | orchestrator | 00:01:24.708 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-10 00:01:24.708845 | orchestrator | 00:01:24.708 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-10 00:01:24.708881 | orchestrator | 00:01:24.708 STDOUT terraform:  + all_tags = (known after apply) 2025-04-10 00:01:24.708917 | orchestrator | 00:01:24.708 STDOUT terraform:  + device_id = (known after apply) 2025-04-10 00:01:24.708953 | orchestrator | 00:01:24.708 STDOUT terraform:  + device_owner = (known after apply) 2025-04-10 00:01:24.708987 | orchestrator | 00:01:24.708 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-10 00:01:24.709024 | orchestrator | 00:01:24.708 STDOUT terraform:  + dns_name = (known after apply) 2025-04-10 00:01:24.709060 | orchestrator | 00:01:24.709 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.709096 | orchestrator | 00:01:24.709 STDOUT terraform:  + 
mac_address = (known after apply) 2025-04-10 00:01:24.709138 | orchestrator | 00:01:24.709 STDOUT terraform:  + network_id = (known after apply) 2025-04-10 00:01:24.709185 | orchestrator | 00:01:24.709 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-10 00:01:24.709207 | orchestrator | 00:01:24.709 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-10 00:01:24.709242 | orchestrator | 00:01:24.709 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.709286 | orchestrator | 00:01:24.709 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-10 00:01:24.709340 | orchestrator | 00:01:24.709 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-10 00:01:24.709377 | orchestrator | 00:01:24.709 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.709407 | orchestrator | 00:01:24.709 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-10 00:01:24.709415 | orchestrator | 00:01:24.709 STDOUT terraform:  } 2025-04-10 00:01:24.709438 | orchestrator | 00:01:24.709 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.709468 | orchestrator | 00:01:24.709 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-10 00:01:24.709475 | orchestrator | 00:01:24.709 STDOUT terraform:  } 2025-04-10 00:01:24.709502 | orchestrator | 00:01:24.709 STDOUT terraform:  + binding (known after apply) 2025-04-10 00:01:24.709509 | orchestrator | 00:01:24.709 STDOUT terraform:  + fixed_ip { 2025-04-10 00:01:24.709538 | orchestrator | 00:01:24.709 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-04-10 00:01:24.709567 | orchestrator | 00:01:24.709 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-10 00:01:24.709583 | orchestrator | 00:01:24.709 STDOUT terraform:  } 2025-04-10 00:01:24.709590 | orchestrator | 00:01:24.709 STDOUT terraform:  } 2025-04-10 00:01:24.709668 | orchestrator | 00:01:24.709 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will 
be created 2025-04-10 00:01:24.709711 | orchestrator | 00:01:24.709 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-10 00:01:24.709748 | orchestrator | 00:01:24.709 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-10 00:01:24.709784 | orchestrator | 00:01:24.709 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-10 00:01:24.709830 | orchestrator | 00:01:24.709 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-10 00:01:24.709857 | orchestrator | 00:01:24.709 STDOUT terraform:  + all_tags = (known after apply) 2025-04-10 00:01:24.709895 | orchestrator | 00:01:24.709 STDOUT terraform:  + device_id = (known after apply) 2025-04-10 00:01:24.709933 | orchestrator | 00:01:24.709 STDOUT terraform:  + device_owner = (known after apply) 2025-04-10 00:01:24.709965 | orchestrator | 00:01:24.709 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-10 00:01:24.710001 | orchestrator | 00:01:24.709 STDOUT terraform:  + dns_name = (known after apply) 2025-04-10 00:01:24.710050 | orchestrator | 00:01:24.709 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.710085 | orchestrator | 00:01:24.710 STDOUT terraform:  + mac_address = (known after apply) 2025-04-10 00:01:24.710121 | orchestrator | 00:01:24.710 STDOUT terraform:  + network_id = (known after apply) 2025-04-10 00:01:24.710179 | orchestrator | 00:01:24.710 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-10 00:01:24.710214 | orchestrator | 00:01:24.710 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-10 00:01:24.710250 | orchestrator | 00:01:24.710 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.710287 | orchestrator | 00:01:24.710 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-10 00:01:24.710327 | orchestrator | 00:01:24.710 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-10 00:01:24.710337 | 
orchestrator | 00:01:24.710 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.710370 | orchestrator | 00:01:24.710 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-10 00:01:24.710378 | orchestrator | 00:01:24.710 STDOUT terraform:  } 2025-04-10 00:01:24.710400 | orchestrator | 00:01:24.710 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.710429 | orchestrator | 00:01:24.710 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-10 00:01:24.710437 | orchestrator | 00:01:24.710 STDOUT terraform:  } 2025-04-10 00:01:24.710459 | orchestrator | 00:01:24.710 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.710487 | orchestrator | 00:01:24.710 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-10 00:01:24.710503 | orchestrator | 00:01:24.710 STDOUT terraform:  } 2025-04-10 00:01:24.710523 | orchestrator | 00:01:24.710 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.710552 | orchestrator | 00:01:24.710 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-10 00:01:24.710559 | orchestrator | 00:01:24.710 STDOUT terraform:  } 2025-04-10 00:01:24.710587 | orchestrator | 00:01:24.710 STDOUT terraform:  + binding (known after apply) 2025-04-10 00:01:24.710594 | orchestrator | 00:01:24.710 STDOUT terraform:  + fixed_ip { 2025-04-10 00:01:24.710622 | orchestrator | 00:01:24.710 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-04-10 00:01:24.710651 | orchestrator | 00:01:24.710 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-10 00:01:24.710659 | orchestrator | 00:01:24.710 STDOUT terraform:  } 2025-04-10 00:01:24.710675 | orchestrator | 00:01:24.710 STDOUT terraform:  } 2025-04-10 00:01:24.710723 | orchestrator | 00:01:24.710 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-04-10 00:01:24.710769 | orchestrator | 00:01:24.710 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-10 
00:01:24.710804 | orchestrator | 00:01:24.710 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-10 00:01:24.710839 | orchestrator | 00:01:24.710 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-10 00:01:24.710873 | orchestrator | 00:01:24.710 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-10 00:01:24.710912 | orchestrator | 00:01:24.710 STDOUT terraform:  + all_tags = (known after apply) 2025-04-10 00:01:24.710947 | orchestrator | 00:01:24.710 STDOUT terraform:  + device_id = (known after apply) 2025-04-10 00:01:24.710983 | orchestrator | 00:01:24.710 STDOUT terraform:  + device_owner = (known after apply) 2025-04-10 00:01:24.711019 | orchestrator | 00:01:24.710 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-10 00:01:24.711055 | orchestrator | 00:01:24.711 STDOUT terraform:  + dns_name = (known after apply) 2025-04-10 00:01:24.711089 | orchestrator | 00:01:24.711 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.711126 | orchestrator | 00:01:24.711 STDOUT terraform:  + mac_address = (known after apply) 2025-04-10 00:01:24.711217 | orchestrator | 00:01:24.711 STDOUT terraform:  + network_id = (known after apply) 2025-04-10 00:01:24.711226 | orchestrator | 00:01:24.711 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-10 00:01:24.711279 | orchestrator | 00:01:24.711 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-10 00:01:24.711316 | orchestrator | 00:01:24.711 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.711352 | orchestrator | 00:01:24.711 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-10 00:01:24.711386 | orchestrator | 00:01:24.711 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-10 00:01:24.711410 | orchestrator | 00:01:24.711 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.711440 | orchestrator | 00:01:24.711 STDOUT terraform:  + ip_address = 
"192.168.112.0/20" 2025-04-10 00:01:24.711452 | orchestrator | 00:01:24.711 STDOUT terraform:  } 2025-04-10 00:01:24.711461 | orchestrator | 00:01:24.711 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.711495 | orchestrator | 00:01:24.711 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-10 00:01:24.711506 | orchestrator | 00:01:24.711 STDOUT terraform:  } 2025-04-10 00:01:24.711516 | orchestrator | 00:01:24.711 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.711549 | orchestrator | 00:01:24.711 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-10 00:01:24.711560 | orchestrator | 00:01:24.711 STDOUT terraform:  } 2025-04-10 00:01:24.711571 | orchestrator | 00:01:24.711 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.711604 | orchestrator | 00:01:24.711 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-10 00:01:24.711616 | orchestrator | 00:01:24.711 STDOUT terraform:  } 2025-04-10 00:01:24.711637 | orchestrator | 00:01:24.711 STDOUT terraform:  + binding (known after apply) 2025-04-10 00:01:24.711648 | orchestrator | 00:01:24.711 STDOUT terraform:  + fixed_ip { 2025-04-10 00:01:24.711675 | orchestrator | 00:01:24.711 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-04-10 00:01:24.711706 | orchestrator | 00:01:24.711 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-10 00:01:24.711717 | orchestrator | 00:01:24.711 STDOUT terraform:  } 2025-04-10 00:01:24.711727 | orchestrator | 00:01:24.711 STDOUT terraform:  } 2025-04-10 00:01:24.711772 | orchestrator | 00:01:24.711 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-04-10 00:01:24.711819 | orchestrator | 00:01:24.711 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-10 00:01:24.711847 | orchestrator | 00:01:24.711 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-10 00:01:24.711895 | orchestrator | 00:01:24.711 
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-10 00:01:24.711929 | orchestrator | 00:01:24.711 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-10 00:01:24.711966 | orchestrator | 00:01:24.711 STDOUT terraform:  + all_tags = (known after apply) 2025-04-10 00:01:24.712001 | orchestrator | 00:01:24.711 STDOUT terraform:  + device_id = (known after apply) 2025-04-10 00:01:24.712036 | orchestrator | 00:01:24.711 STDOUT terraform:  + device_owner = (known after apply) 2025-04-10 00:01:24.712072 | orchestrator | 00:01:24.712 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-10 00:01:24.712109 | orchestrator | 00:01:24.712 STDOUT terraform:  + dns_name = (known after apply) 2025-04-10 00:01:24.712168 | orchestrator | 00:01:24.712 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.712195 | orchestrator | 00:01:24.712 STDOUT terraform:  + mac_address = (known after apply) 2025-04-10 00:01:24.712229 | orchestrator | 00:01:24.712 STDOUT terraform:  + network_id = (known after apply) 2025-04-10 00:01:24.712265 | orchestrator | 00:01:24.712 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-10 00:01:24.712299 | orchestrator | 00:01:24.712 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-10 00:01:24.712341 | orchestrator | 00:01:24.712 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.712421 | orchestrator | 00:01:24.712 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-10 00:01:24.713228 | orchestrator | 00:01:24.712 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-10 00:01:24.713257 | orchestrator | 00:01:24.713 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.713274 | orchestrator | 00:01:24.713 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-10 00:01:24.713282 | orchestrator | 00:01:24.713 STDOUT terraform:  } 2025-04-10 00:01:24.713296 | orchestrator | 00:01:24.713 STDOUT terraform:  
+ allowed_address_pairs { 2025-04-10 00:01:24.713326 | orchestrator | 00:01:24.713 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-10 00:01:24.713341 | orchestrator | 00:01:24.713 STDOUT terraform:  } 2025-04-10 00:01:24.713363 | orchestrator | 00:01:24.713 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.713393 | orchestrator | 00:01:24.713 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-10 00:01:24.713408 | orchestrator | 00:01:24.713 STDOUT terraform:  } 2025-04-10 00:01:24.713429 | orchestrator | 00:01:24.713 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.713458 | orchestrator | 00:01:24.713 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-10 00:01:24.713465 | orchestrator | 00:01:24.713 STDOUT terraform:  } 2025-04-10 00:01:24.713492 | orchestrator | 00:01:24.713 STDOUT terraform:  + binding (known after apply) 2025-04-10 00:01:24.713508 | orchestrator | 00:01:24.713 STDOUT terraform:  + fixed_ip { 2025-04-10 00:01:24.713536 | orchestrator | 00:01:24.713 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-04-10 00:01:24.713562 | orchestrator | 00:01:24.713 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-10 00:01:24.713570 | orchestrator | 00:01:24.713 STDOUT terraform:  } 2025-04-10 00:01:24.713585 | orchestrator | 00:01:24.713 STDOUT terraform:  } 2025-04-10 00:01:24.713631 | orchestrator | 00:01:24.713 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-04-10 00:01:24.713675 | orchestrator | 00:01:24.713 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-10 00:01:24.713712 | orchestrator | 00:01:24.713 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-10 00:01:24.713748 | orchestrator | 00:01:24.713 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-10 00:01:24.713782 | orchestrator | 00:01:24.713 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-04-10 00:01:24.713819 | orchestrator | 00:01:24.713 STDOUT terraform:  + all_tags = (known after apply) 2025-04-10 00:01:24.713855 | orchestrator | 00:01:24.713 STDOUT terraform:  + device_id = (known after apply) 2025-04-10 00:01:24.713890 | orchestrator | 00:01:24.713 STDOUT terraform:  + device_owner = (known after apply) 2025-04-10 00:01:24.713925 | orchestrator | 00:01:24.713 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-10 00:01:24.713961 | orchestrator | 00:01:24.713 STDOUT terraform:  + dns_name = (known after apply) 2025-04-10 00:01:24.713998 | orchestrator | 00:01:24.713 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.714060 | orchestrator | 00:01:24.713 STDOUT terraform:  + mac_address = (known after apply) 2025-04-10 00:01:24.714096 | orchestrator | 00:01:24.714 STDOUT terraform:  + network_id = (known after apply) 2025-04-10 00:01:24.714132 | orchestrator | 00:01:24.714 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-10 00:01:24.714196 | orchestrator | 00:01:24.714 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-10 00:01:24.714229 | orchestrator | 00:01:24.714 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.714264 | orchestrator | 00:01:24.714 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-10 00:01:24.714304 | orchestrator | 00:01:24.714 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-10 00:01:24.714315 | orchestrator | 00:01:24.714 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.714348 | orchestrator | 00:01:24.714 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-10 00:01:24.714358 | orchestrator | 00:01:24.714 STDOUT terraform:  } 2025-04-10 00:01:24.714377 | orchestrator | 00:01:24.714 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.714406 | orchestrator | 00:01:24.714 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-10 00:01:24.714422 | 
orchestrator | 00:01:24.714 STDOUT terraform:  } 2025-04-10 00:01:24.714436 | orchestrator | 00:01:24.714 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.714465 | orchestrator | 00:01:24.714 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-10 00:01:24.714473 | orchestrator | 00:01:24.714 STDOUT terraform:  } 2025-04-10 00:01:24.714495 | orchestrator | 00:01:24.714 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.714523 | orchestrator | 00:01:24.714 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-10 00:01:24.714530 | orchestrator | 00:01:24.714 STDOUT terraform:  } 2025-04-10 00:01:24.714556 | orchestrator | 00:01:24.714 STDOUT terraform:  + binding (known after apply) 2025-04-10 00:01:24.714565 | orchestrator | 00:01:24.714 STDOUT terraform:  + fixed_ip { 2025-04-10 00:01:24.714592 | orchestrator | 00:01:24.714 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-04-10 00:01:24.714623 | orchestrator | 00:01:24.714 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-10 00:01:24.714630 | orchestrator | 00:01:24.714 STDOUT terraform:  } 2025-04-10 00:01:24.714646 | orchestrator | 00:01:24.714 STDOUT terraform:  } 2025-04-10 00:01:24.714691 | orchestrator | 00:01:24.714 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-04-10 00:01:24.714736 | orchestrator | 00:01:24.714 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-10 00:01:24.714771 | orchestrator | 00:01:24.714 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-10 00:01:24.714808 | orchestrator | 00:01:24.714 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-10 00:01:24.714844 | orchestrator | 00:01:24.714 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-10 00:01:24.714881 | orchestrator | 00:01:24.714 STDOUT terraform:  + all_tags = (known after apply) 2025-04-10 00:01:24.714917 | orchestrator | 
00:01:24.714 STDOUT terraform:  + device_id = (known after apply) 2025-04-10 00:01:24.714952 | orchestrator | 00:01:24.714 STDOUT terraform:  + device_owner = (known after apply) 2025-04-10 00:01:24.714986 | orchestrator | 00:01:24.714 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-10 00:01:24.715045 | orchestrator | 00:01:24.714 STDOUT terraform:  + dns_name = (known after apply) 2025-04-10 00:01:24.715054 | orchestrator | 00:01:24.715 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.715091 | orchestrator | 00:01:24.715 STDOUT terraform:  + mac_address = (known after apply) 2025-04-10 00:01:24.715127 | orchestrator | 00:01:24.715 STDOUT terraform:  + network_id = (known after apply) 2025-04-10 00:01:24.715176 | orchestrator | 00:01:24.715 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-10 00:01:24.715211 | orchestrator | 00:01:24.715 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-10 00:01:24.715249 | orchestrator | 00:01:24.715 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.715284 | orchestrator | 00:01:24.715 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-10 00:01:24.715321 | orchestrator | 00:01:24.715 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-10 00:01:24.715333 | orchestrator | 00:01:24.715 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.715364 | orchestrator | 00:01:24.715 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-10 00:01:24.715380 | orchestrator | 00:01:24.715 STDOUT terraform:  } 2025-04-10 00:01:24.715403 | orchestrator | 00:01:24.715 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.715414 | orchestrator | 00:01:24.715 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-10 00:01:24.715423 | orchestrator | 00:01:24.715 STDOUT terraform:  } 2025-04-10 00:01:24.715446 | orchestrator | 00:01:24.715 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 
00:01:24.715474 | orchestrator | 00:01:24.715 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-10 00:01:24.715485 | orchestrator | 00:01:24.715 STDOUT terraform:  } 2025-04-10 00:01:24.715504 | orchestrator | 00:01:24.715 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.715532 | orchestrator | 00:01:24.715 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-10 00:01:24.715544 | orchestrator | 00:01:24.715 STDOUT terraform:  } 2025-04-10 00:01:24.715567 | orchestrator | 00:01:24.715 STDOUT terraform:  + binding (known after apply) 2025-04-10 00:01:24.715578 | orchestrator | 00:01:24.715 STDOUT terraform:  + fixed_ip { 2025-04-10 00:01:24.715604 | orchestrator | 00:01:24.715 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-04-10 00:01:24.715631 | orchestrator | 00:01:24.715 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-10 00:01:24.715639 | orchestrator | 00:01:24.715 STDOUT terraform:  } 2025-04-10 00:01:24.715655 | orchestrator | 00:01:24.715 STDOUT terraform:  } 2025-04-10 00:01:24.715704 | orchestrator | 00:01:24.715 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-04-10 00:01:24.715749 | orchestrator | 00:01:24.715 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-10 00:01:24.715781 | orchestrator | 00:01:24.715 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-10 00:01:24.715816 | orchestrator | 00:01:24.715 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-10 00:01:24.715850 | orchestrator | 00:01:24.715 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-10 00:01:24.715886 | orchestrator | 00:01:24.715 STDOUT terraform:  + all_tags = (known after apply) 2025-04-10 00:01:24.715922 | orchestrator | 00:01:24.715 STDOUT terraform:  + device_id = (known after apply) 2025-04-10 00:01:24.715958 | orchestrator | 00:01:24.715 STDOUT terraform:  + device_owner = (known after 
apply) 2025-04-10 00:01:24.715994 | orchestrator | 00:01:24.715 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-10 00:01:24.716030 | orchestrator | 00:01:24.715 STDOUT terraform:  + dns_name = (known after apply) 2025-04-10 00:01:24.716066 | orchestrator | 00:01:24.716 STDOUT terraform:  + id = (known after apply) 2025-04-10 00:01:24.716101 | orchestrator | 00:01:24.716 STDOUT terraform:  + mac_address = (known after apply) 2025-04-10 00:01:24.716138 | orchestrator | 00:01:24.716 STDOUT terraform:  + network_id = (known after apply) 2025-04-10 00:01:24.716190 | orchestrator | 00:01:24.716 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-10 00:01:24.716222 | orchestrator | 00:01:24.716 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-10 00:01:24.716257 | orchestrator | 00:01:24.716 STDOUT terraform:  + region = (known after apply) 2025-04-10 00:01:24.716293 | orchestrator | 00:01:24.716 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-10 00:01:24.716328 | orchestrator | 00:01:24.716 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-10 00:01:24.716349 | orchestrator | 00:01:24.716 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.716378 | orchestrator | 00:01:24.716 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-10 00:01:24.716386 | orchestrator | 00:01:24.716 STDOUT terraform:  } 2025-04-10 00:01:24.716409 | orchestrator | 00:01:24.716 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.716437 | orchestrator | 00:01:24.716 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-10 00:01:24.716444 | orchestrator | 00:01:24.716 STDOUT terraform:  } 2025-04-10 00:01:24.716467 | orchestrator | 00:01:24.716 STDOUT terraform:  + allowed_address_pairs { 2025-04-10 00:01:24.716495 | orchestrator | 00:01:24.716 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-10 00:01:24.716502 | orchestrator | 00:01:24.716 STDOUT terraform:  } 
2025-04-10 00:01:24.716529 | orchestrator | 00:01:24.716 STDOUT terraform:  + allowed_address_pairs {
2025-04-10 00:01:24.716559 | orchestrator | 00:01:24.716 STDOUT terraform:  + ip_address = "192.168.16.9/20"
2025-04-10 00:01:24.716566 | orchestrator | 00:01:24.716 STDOUT terraform:  }
2025-04-10 00:01:24.716594 | orchestrator | 00:01:24.716 STDOUT terraform:  + binding (known after apply)
2025-04-10 00:01:24.716601 | orchestrator | 00:01:24.716 STDOUT terraform:  + fixed_ip {
2025-04-10 00:01:24.716628 | orchestrator | 00:01:24.716 STDOUT terraform:  + ip_address = "192.168.16.15"
2025-04-10 00:01:24.716661 | orchestrator | 00:01:24.716 STDOUT terraform:  + subnet_id = (known after apply)
2025-04-10 00:01:24.716668 | orchestrator | 00:01:24.716 STDOUT terraform:  }
2025-04-10 00:01:24.716675 | orchestrator | 00:01:24.716 STDOUT terraform:  }
2025-04-10 00:01:24.716725 | orchestrator | 00:01:24.716 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created
2025-04-10 00:01:24.716776 | orchestrator | 00:01:24.716 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" {
2025-04-10 00:01:24.716792 | orchestrator | 00:01:24.716 STDOUT terraform:  + force_destroy = false
2025-04-10 00:01:24.716823 | orchestrator | 00:01:24.716 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.716852 | orchestrator | 00:01:24.716 STDOUT terraform:  + port_id = (known after apply)
2025-04-10 00:01:24.716881 | orchestrator | 00:01:24.716 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.716910 | orchestrator | 00:01:24.716 STDOUT terraform:  + router_id = (known after apply)
2025-04-10 00:01:24.716940 | orchestrator | 00:01:24.716 STDOUT terraform:  + subnet_id = (known after apply)
2025-04-10 00:01:24.716947 | orchestrator | 00:01:24.716 STDOUT terraform:  }
2025-04-10 00:01:24.716984 | orchestrator | 00:01:24.716 STDOUT terraform:  # openstack_networking_router_v2.router will be created
2025-04-10 00:01:24.717021 | orchestrator | 00:01:24.716 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" {
2025-04-10 00:01:24.717058 | orchestrator | 00:01:24.717 STDOUT terraform:  + admin_state_up = (known after apply)
2025-04-10 00:01:24.717094 | orchestrator | 00:01:24.717 STDOUT terraform:  + all_tags = (known after apply)
2025-04-10 00:01:24.717117 | orchestrator | 00:01:24.717 STDOUT terraform:  + availability_zone_hints = [
2025-04-10 00:01:24.717125 | orchestrator | 00:01:24.717 STDOUT terraform:  + "nova",
2025-04-10 00:01:24.717141 | orchestrator | 00:01:24.717 STDOUT terraform:  ]
2025-04-10 00:01:24.717200 | orchestrator | 00:01:24.717 STDOUT terraform:  + distributed = (known after apply)
2025-04-10 00:01:24.717236 | orchestrator | 00:01:24.717 STDOUT terraform:  + enable_snat = (known after apply)
2025-04-10 00:01:24.717285 | orchestrator | 00:01:24.717 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2025-04-10 00:01:24.717322 | orchestrator | 00:01:24.717 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.717352 | orchestrator | 00:01:24.717 STDOUT terraform:  + name = "testbed"
2025-04-10 00:01:24.717388 | orchestrator | 00:01:24.717 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.717426 | orchestrator | 00:01:24.717 STDOUT terraform:  + tenant_id = (known after apply)
2025-04-10 00:01:24.717455 | orchestrator | 00:01:24.717 STDOUT terraform:  + external_fixed_ip (known after apply)
2025-04-10 00:01:24.717462 | orchestrator | 00:01:24.717 STDOUT terraform:  }
2025-04-10 00:01:24.717517 | orchestrator | 00:01:24.717 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2025-04-10 00:01:24.717569 | orchestrator | 00:01:24.717 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2025-04-10 00:01:24.717591 | orchestrator | 00:01:24.717 STDOUT terraform:  + description = "ssh"
2025-04-10 00:01:24.717614 | orchestrator | 00:01:24.717 STDOUT terraform:  + direction = "ingress"
2025-04-10 00:01:24.717637 | orchestrator | 00:01:24.717 STDOUT terraform:  + ethertype = "IPv4"
2025-04-10 00:01:24.717670 | orchestrator | 00:01:24.717 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.717690 | orchestrator | 00:01:24.717 STDOUT terraform:  + port_range_max = 22
2025-04-10 00:01:24.717710 | orchestrator | 00:01:24.717 STDOUT terraform:  + port_range_min = 22
2025-04-10 00:01:24.717732 | orchestrator | 00:01:24.717 STDOUT terraform:  + protocol = "tcp"
2025-04-10 00:01:24.717764 | orchestrator | 00:01:24.717 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.717794 | orchestrator | 00:01:24.717 STDOUT terraform:  + remote_group_id = (known after apply)
2025-04-10 00:01:24.717819 | orchestrator | 00:01:24.717 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-04-10 00:01:24.717848 | orchestrator | 00:01:24.717 STDOUT terraform:  + security_group_id = (known after apply)
2025-04-10 00:01:24.717879 | orchestrator | 00:01:24.717 STDOUT terraform:  + tenant_id = (known after apply)
2025-04-10 00:01:24.717886 | orchestrator | 00:01:24.717 STDOUT terraform:  }
2025-04-10 00:01:24.717940 | orchestrator | 00:01:24.717 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2025-04-10 00:01:24.717994 | orchestrator | 00:01:24.717 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2025-04-10 00:01:24.718045 | orchestrator | 00:01:24.717 STDOUT terraform:  + description = "wireguard"
2025-04-10 00:01:24.718054 | orchestrator | 00:01:24.718 STDOUT terraform:  + direction = "ingress"
2025-04-10 00:01:24.718072 | orchestrator | 00:01:24.718 STDOUT terraform:  + ethertype = "IPv4"
2025-04-10 00:01:24.718104 | orchestrator | 00:01:24.718 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.718124 | orchestrator | 00:01:24.718 STDOUT terraform:  + port_range_max = 51820
2025-04-10 00:01:24.718144 | orchestrator | 00:01:24.718 STDOUT terraform:  + port_range_min = 51820
2025-04-10 00:01:24.718174 | orchestrator | 00:01:24.718 STDOUT terraform:  + protocol = "udp"
2025-04-10 00:01:24.718205 | orchestrator | 00:01:24.718 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.718234 | orchestrator | 00:01:24.718 STDOUT terraform:  + remote_group_id = (known after apply)
2025-04-10 00:01:24.718259 | orchestrator | 00:01:24.718 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-04-10 00:01:24.718293 | orchestrator | 00:01:24.718 STDOUT terraform:  + security_group_id = (known after apply)
2025-04-10 00:01:24.718323 | orchestrator | 00:01:24.718 STDOUT terraform:  + tenant_id = (known after apply)
2025-04-10 00:01:24.718335 | orchestrator | 00:01:24.718 STDOUT terraform:  }
2025-04-10 00:01:24.718383 | orchestrator | 00:01:24.718 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2025-04-10 00:01:24.718435 | orchestrator | 00:01:24.718 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2025-04-10 00:01:24.718459 | orchestrator | 00:01:24.718 STDOUT terraform:  + direction = "ingress"
2025-04-10 00:01:24.718480 | orchestrator | 00:01:24.718 STDOUT terraform:  + ethertype = "IPv4"
2025-04-10 00:01:24.718512 | orchestrator | 00:01:24.718 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.718533 | orchestrator | 00:01:24.718 STDOUT terraform:  + protocol = "tcp"
2025-04-10 00:01:24.718563 | orchestrator | 00:01:24.718 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.718594 | orchestrator | 00:01:24.718 STDOUT terraform:  + remote_group_id = (known after apply)
2025-04-10 00:01:24.718623 | orchestrator | 00:01:24.718 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20"
2025-04-10 00:01:24.718653 | orchestrator | 00:01:24.718 STDOUT terraform:  + security_group_id = (known after apply)
2025-04-10 00:01:24.718683 | orchestrator | 00:01:24.718 STDOUT terraform:  + tenant_id = (known after apply)
2025-04-10 00:01:24.718690 | orchestrator | 00:01:24.718 STDOUT terraform:  }
2025-04-10 00:01:24.718746 | orchestrator | 00:01:24.718 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2025-04-10 00:01:24.718798 | orchestrator | 00:01:24.718 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2025-04-10 00:01:24.718823 | orchestrator | 00:01:24.718 STDOUT terraform:  + direction = "ingress"
2025-04-10 00:01:24.718844 | orchestrator | 00:01:24.718 STDOUT terraform:  + ethertype = "IPv4"
2025-04-10 00:01:24.718874 | orchestrator | 00:01:24.718 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.718896 | orchestrator | 00:01:24.718 STDOUT terraform:  + protocol = "udp"
2025-04-10 00:01:24.718925 | orchestrator | 00:01:24.718 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.718955 | orchestrator | 00:01:24.718 STDOUT terraform:  + remote_group_id = (known after apply)
2025-04-10 00:01:24.718987 | orchestrator | 00:01:24.718 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20"
2025-04-10 00:01:24.719015 | orchestrator | 00:01:24.718 STDOUT terraform:  + security_group_id = (known after apply)
2025-04-10 00:01:24.719046 | orchestrator | 00:01:24.719 STDOUT terraform:  + tenant_id = (known after apply)
2025-04-10 00:01:24.719053 | orchestrator | 00:01:24.719 STDOUT terraform:  }
2025-04-10 00:01:24.719108 | orchestrator | 00:01:24.719 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2025-04-10 00:01:24.719195 | orchestrator | 00:01:24.719 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2025-04-10 00:01:24.719218 | orchestrator | 00:01:24.719 STDOUT terraform:  + direction = "ingress"
2025-04-10 00:01:24.719225 | orchestrator | 00:01:24.719 STDOUT terraform:  + ethertype = "IPv4"
2025-04-10 00:01:24.719251 | orchestrator | 00:01:24.719 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.719272 | orchestrator | 00:01:24.719 STDOUT terraform:  + protocol = "icmp"
2025-04-10 00:01:24.719303 | orchestrator | 00:01:24.719 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.719334 | orchestrator | 00:01:24.719 STDOUT terraform:  + remote_group_id = (known after apply)
2025-04-10 00:01:24.719359 | orchestrator | 00:01:24.719 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-04-10 00:01:24.719391 | orchestrator | 00:01:24.719 STDOUT terraform:  + security_group_id = (known after apply)
2025-04-10 00:01:24.719420 | orchestrator | 00:01:24.719 STDOUT terraform:  + tenant_id = (known after apply)
2025-04-10 00:01:24.719427 | orchestrator | 00:01:24.719 STDOUT terraform:  }
2025-04-10 00:01:24.719481 | orchestrator | 00:01:24.719 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2025-04-10 00:01:24.719532 | orchestrator | 00:01:24.719 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-04-10 00:01:24.719558 | orchestrator | 00:01:24.719 STDOUT terraform:  + direction = "ingress"
2025-04-10 00:01:24.719577 | orchestrator | 00:01:24.719 STDOUT terraform:  + ethertype = "IPv4"
2025-04-10 00:01:24.719609 | orchestrator | 00:01:24.719 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.719630 | orchestrator | 00:01:24.719 STDOUT terraform:  + protocol = "tcp"
2025-04-10 00:01:24.719661 | orchestrator | 00:01:24.719 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.719691 | orchestrator | 00:01:24.719 STDOUT terraform:  + remote_group_id = (known after apply)
2025-04-10 00:01:24.719715 | orchestrator | 00:01:24.719 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-04-10 00:01:24.719746 | orchestrator | 00:01:24.719 STDOUT terraform:  + security_group_id = (known after apply)
2025-04-10 00:01:24.719775 | orchestrator | 00:01:24.719 STDOUT terraform:  + tenant_id = (known after apply)
2025-04-10 00:01:24.719783 | orchestrator | 00:01:24.719 STDOUT terraform:  }
2025-04-10 00:01:24.719835 | orchestrator | 00:01:24.719 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-04-10 00:01:24.719889 | orchestrator | 00:01:24.719 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-04-10 00:01:24.719901 | orchestrator | 00:01:24.719 STDOUT terraform:  + direction = "ingress"
2025-04-10 00:01:24.719927 | orchestrator | 00:01:24.719 STDOUT terraform:  + ethertype = "IPv4"
2025-04-10 00:01:24.719959 | orchestrator | 00:01:24.719 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.719979 | orchestrator | 00:01:24.719 STDOUT terraform:  + protocol = "udp"
2025-04-10 00:01:24.720010 | orchestrator | 00:01:24.719 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.720040 | orchestrator | 00:01:24.720 STDOUT terraform:  + remote_group_id = (known after apply)
2025-04-10 00:01:24.720064 | orchestrator | 00:01:24.720 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-04-10 00:01:24.720095 | orchestrator | 00:01:24.720 STDOUT terraform:  + security_group_id = (known after apply)
2025-04-10 00:01:24.720125 | orchestrator | 00:01:24.720 STDOUT terraform:  + tenant_id = (known after apply)
2025-04-10 00:01:24.720135 | orchestrator | 00:01:24.720 STDOUT terraform:  }
2025-04-10 00:01:24.720198 | orchestrator | 00:01:24.720 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-04-10 00:01:24.720250 | orchestrator | 00:01:24.720 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-04-10 00:01:24.720273 | orchestrator | 00:01:24.720 STDOUT terraform:  + direction = "ingress"
2025-04-10 00:01:24.720294 | orchestrator | 00:01:24.720 STDOUT terraform:  + ethertype = "IPv4"
2025-04-10 00:01:24.720326 | orchestrator | 00:01:24.720 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.720337 | orchestrator | 00:01:24.720 STDOUT terraform:  + protocol = "icmp"
2025-04-10 00:01:24.720372 | orchestrator | 00:01:24.720 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.720402 | orchestrator | 00:01:24.720 STDOUT terraform:  + remote_group_id = (known after apply)
2025-04-10 00:01:24.720427 | orchestrator | 00:01:24.720 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-04-10 00:01:24.720456 | orchestrator | 00:01:24.720 STDOUT terraform:  + security_group_id = (known after apply)
2025-04-10 00:01:24.720485 | orchestrator | 00:01:24.720 STDOUT terraform:  + tenant_id = (known after apply)
2025-04-10 00:01:24.720500 | orchestrator | 00:01:24.720 STDOUT terraform:  }
2025-04-10 00:01:24.720579 | orchestrator | 00:01:24.720 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-04-10 00:01:24.720630 | orchestrator | 00:01:24.720 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-04-10 00:01:24.720651 | orchestrator | 00:01:24.720 STDOUT terraform:  + description = "vrrp"
2025-04-10 00:01:24.720676 | orchestrator | 00:01:24.720 STDOUT terraform:  + direction = "ingress"
2025-04-10 00:01:24.720698 | orchestrator | 00:01:24.720 STDOUT terraform:  + ethertype = "IPv4"
2025-04-10 00:01:24.720729 | orchestrator | 00:01:24.720 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.720749 | orchestrator | 00:01:24.720 STDOUT terraform:  + protocol = "112"
2025-04-10 00:01:24.720780 | orchestrator | 00:01:24.720 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.720816 | orchestrator | 00:01:24.720 STDOUT terraform:  + remote_group_id = (known after apply)
2025-04-10 00:01:24.720835 | orchestrator | 00:01:24.720 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-04-10 00:01:24.720865 | orchestrator | 00:01:24.720 STDOUT terraform:  + security_group_id = (known after apply)
2025-04-10 00:01:24.720897 | orchestrator | 00:01:24.720 STDOUT terraform:  + tenant_id = (known after apply)
2025-04-10 00:01:24.720904 | orchestrator | 00:01:24.720 STDOUT terraform:  }
2025-04-10 00:01:24.720971 | orchestrator | 00:01:24.720 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-04-10 00:01:24.721020 | orchestrator | 00:01:24.720 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-04-10 00:01:24.721049 | orchestrator | 00:01:24.721 STDOUT terraform:  + all_tags = (known after apply)
2025-04-10 00:01:24.721082 | orchestrator | 00:01:24.721 STDOUT terraform:  + description = "management security group"
2025-04-10 00:01:24.721111 | orchestrator | 00:01:24.721 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.721140 | orchestrator | 00:01:24.721 STDOUT terraform:  + name = "testbed-management"
2025-04-10 00:01:24.721183 | orchestrator | 00:01:24.721 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.721210 | orchestrator | 00:01:24.721 STDOUT terraform:  + stateful = (known after apply)
2025-04-10 00:01:24.721248 | orchestrator | 00:01:24.721 STDOUT terraform:  + tenant_id = (known after apply)
2025-04-10 00:01:24.721255 | orchestrator | 00:01:24.721 STDOUT terraform:  }
2025-04-10 00:01:24.721305 | orchestrator | 00:01:24.721 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-04-10 00:01:24.721351 | orchestrator | 00:01:24.721 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-04-10 00:01:24.721379 | orchestrator | 00:01:24.721 STDOUT terraform:  + all_tags = (known after apply)
2025-04-10 00:01:24.721408 | orchestrator | 00:01:24.721 STDOUT terraform:  + description = "node security group"
2025-04-10 00:01:24.721437 | orchestrator | 00:01:24.721 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.721461 | orchestrator | 00:01:24.721 STDOUT terraform:  + name = "testbed-node"
2025-04-10 00:01:24.721489 | orchestrator | 00:01:24.721 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.721517 | orchestrator | 00:01:24.721 STDOUT terraform:  + stateful = (known after apply)
2025-04-10 00:01:24.721545 | orchestrator | 00:01:24.721 STDOUT terraform:  + tenant_id = (known after apply)
2025-04-10 00:01:24.721552 | orchestrator | 00:01:24.721 STDOUT terraform:  }
2025-04-10 00:01:24.721599 | orchestrator | 00:01:24.721 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-04-10 00:01:24.721643 | orchestrator | 00:01:24.721 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-04-10 00:01:24.721673 | orchestrator | 00:01:24.721 STDOUT terraform:  + all_tags = (known after apply)
2025-04-10 00:01:24.721704 | orchestrator | 00:01:24.721 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-04-10 00:01:24.721724 | orchestrator | 00:01:24.721 STDOUT terraform:  + dns_nameservers = [
2025-04-10 00:01:24.721741 | orchestrator | 00:01:24.721 STDOUT terraform:  + "8.8.8.8",
2025-04-10 00:01:24.721758 | orchestrator | 00:01:24.721 STDOUT terraform:  + "9.9.9.9",
2025-04-10 00:01:24.721765 | orchestrator | 00:01:24.721 STDOUT terraform:  ]
2025-04-10 00:01:24.721790 | orchestrator | 00:01:24.721 STDOUT terraform:  + enable_dhcp = true
2025-04-10 00:01:24.721820 | orchestrator | 00:01:24.721 STDOUT terraform:  + gateway_ip = (known after apply)
2025-04-10 00:01:24.721852 | orchestrator | 00:01:24.721 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.721873 | orchestrator | 00:01:24.721 STDOUT terraform:  + ip_version = 4
2025-04-10 00:01:24.721904 | orchestrator | 00:01:24.721 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-04-10 00:01:24.721934 | orchestrator | 00:01:24.721 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-04-10 00:01:24.721973 | orchestrator | 00:01:24.721 STDOUT terraform:  + name = "subnet-testbed-management"
2025-04-10 00:01:24.722004 | orchestrator | 00:01:24.721 STDOUT terraform:  + network_id = (known after apply)
2025-04-10 00:01:24.722051 | orchestrator | 00:01:24.721 STDOUT terraform:  + no_gateway = false
2025-04-10 00:01:24.722070 | orchestrator | 00:01:24.722 STDOUT terraform:  + region = (known after apply)
2025-04-10 00:01:24.722102 | orchestrator | 00:01:24.722 STDOUT terraform:  + service_types = (known after apply)
2025-04-10 00:01:24.722131 | orchestrator | 00:01:24.722 STDOUT terraform:  + tenant_id = (known after apply)
2025-04-10 00:01:24.722187 | orchestrator | 00:01:24.722 STDOUT terraform:  + allocation_pool {
2025-04-10 00:01:24.722196 | orchestrator | 00:01:24.722 STDOUT terraform:  + end = "192.168.31.250"
2025-04-10 00:01:24.722217 | orchestrator | 00:01:24.722 STDOUT terraform:  + start = "192.168.31.200"
2025-04-10 00:01:24.722224 | orchestrator | 00:01:24.722 STDOUT terraform:  }
2025-04-10 00:01:24.722240 | orchestrator | 00:01:24.722 STDOUT terraform:  }
2025-04-10 00:01:24.722265 | orchestrator | 00:01:24.722 STDOUT terraform:  # terraform_data.image will be created
2025-04-10 00:01:24.722292 | orchestrator | 00:01:24.722 STDOUT terraform:  + resource "terraform_data" "image" {
2025-04-10 00:01:24.722315 | orchestrator | 00:01:24.722 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.722335 | orchestrator | 00:01:24.722 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-04-10 00:01:24.722355 | orchestrator | 00:01:24.722 STDOUT terraform:  + output = (known after apply)
2025-04-10 00:01:24.722367 | orchestrator | 00:01:24.722 STDOUT terraform:  }
2025-04-10 00:01:24.722392 | orchestrator | 00:01:24.722 STDOUT terraform:  # terraform_data.image_node will be created
2025-04-10 00:01:24.722420 | orchestrator | 00:01:24.722 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-04-10 00:01:24.722445 | orchestrator | 00:01:24.722 STDOUT terraform:  + id = (known after apply)
2025-04-10 00:01:24.722466 | orchestrator | 00:01:24.722 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-04-10 00:01:24.722491 | orchestrator | 00:01:24.722 STDOUT terraform:  + output = (known after apply)
2025-04-10 00:01:24.722498 | orchestrator | 00:01:24.722 STDOUT terraform:  }
2025-04-10 00:01:24.722529 | orchestrator | 00:01:24.722 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy.
2025-04-10 00:01:24.722544 | orchestrator | 00:01:24.722 STDOUT terraform: Changes to Outputs:
2025-04-10 00:01:24.722569 | orchestrator | 00:01:24.722 STDOUT terraform:  + manager_address = (sensitive value)
2025-04-10 00:01:24.722593 | orchestrator | 00:01:24.722 STDOUT terraform:  + private_key = (sensitive value)
2025-04-10 00:01:24.944287 | orchestrator | 00:01:24.943 STDOUT terraform: terraform_data.image_node: Creating...
2025-04-10 00:01:24.944991 | orchestrator | 00:01:24.943 STDOUT terraform: terraform_data.image: Creating...
2025-04-10 00:01:24.945084 | orchestrator | 00:01:24.943 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=d8240f61-65d6-5544-c0fa-b075a6484ed0]
2025-04-10 00:01:24.945115 | orchestrator | 00:01:24.944 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=61d072eb-b096-a75f-3c9c-f498d763e86a]
2025-04-10 00:01:24.956136 | orchestrator | 00:01:24.955 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-04-10 00:01:24.969860 | orchestrator | 00:01:24.969 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
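For orientation, the security-group rules listed in the plan above correspond to Terraform configuration along these lines. This is a reconstruction from the plan output only, not the actual files in the testbed repository; the resource names are taken from the plan, but attribute layout is an assumption:

```hcl
# Reconstructed sketch (assumption): management security group plus its
# ssh rule, with the exact values shown in the plan output above.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```

The remaining rules in the plan (wireguard on udp/51820, intra-subnet tcp/udp from 192.168.16.0/20, icmp, and vrrp via protocol "112") follow the same pattern with the values logged above.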
2025-04-10 00:01:24.973548 | orchestrator | 00:01:24.973 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-04-10 00:01:24.973597 | orchestrator | 00:01:24.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating...
2025-04-10 00:01:24.973640 | orchestrator | 00:01:24.973 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-04-10 00:01:24.973651 | orchestrator | 00:01:24.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating...
2025-04-10 00:01:24.973685 | orchestrator | 00:01:24.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating...
2025-04-10 00:01:24.973728 | orchestrator | 00:01:24.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-04-10 00:01:24.973761 | orchestrator | 00:01:24.973 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-04-10 00:01:24.973806 | orchestrator | 00:01:24.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-04-10 00:01:25.403865 | orchestrator | 00:01:25.403 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-04-10 00:01:25.409699 | orchestrator | 00:01:25.409 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-04-10 00:01:25.410631 | orchestrator | 00:01:25.410 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-04-10 00:01:25.416599 | orchestrator | 00:01:25.416 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-04-10 00:01:25.585706 | orchestrator | 00:01:25.585 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-04-10 00:01:25.593484 | orchestrator | 00:01:25.593 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating...
2025-04-10 00:01:30.809615 | orchestrator | 00:01:30.809 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=202fa545-f629-47d7-9de1-4b46dff81193]
2025-04-10 00:01:30.816911 | orchestrator | 00:01:30.816 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-04-10 00:01:34.972071 | orchestrator | 00:01:34.971 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-04-10 00:01:34.973040 | orchestrator | 00:01:34.972 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-04-10 00:01:34.973120 | orchestrator | 00:01:34.972 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed]
2025-04-10 00:01:34.973389 | orchestrator | 00:01:34.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed]
2025-04-10 00:01:34.973465 | orchestrator | 00:01:34.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed]
2025-04-10 00:01:34.974280 | orchestrator | 00:01:34.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-04-10 00:01:35.412096 | orchestrator | 00:01:35.411 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-04-10 00:01:35.417178 | orchestrator | 00:01:35.416 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-04-10 00:01:35.568682 | orchestrator | 00:01:35.568 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=864e33c6-b4c3-48eb-91b8-2629744c3ba6]
2025-04-10 00:01:35.574008 | orchestrator | 00:01:35.573 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-04-10 00:01:35.582338 | orchestrator | 00:01:35.582 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=c760fe92-14ba-404a-b6f7-3b1432fac79b]
2025-04-10 00:01:35.582655 | orchestrator | 00:01:35.582 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 11s [id=8309ccf2-021f-4ba0-8871-1baa1ae2c644]
2025-04-10 00:01:35.586805 | orchestrator | 00:01:35.586 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating...
2025-04-10 00:01:35.587608 | orchestrator | 00:01:35.587 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-04-10 00:01:35.593923 | orchestrator | 00:01:35.593 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed]
2025-04-10 00:01:35.599019 | orchestrator | 00:01:35.598 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=d97216ad-03db-4dc0-9fce-19fb462ce1e2]
2025-04-10 00:01:35.603354 | orchestrator | 00:01:35.603 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating...
2025-04-10 00:01:35.608662 | orchestrator | 00:01:35.608 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=5bfdc91a-8c25-4ce3-95e3-852e7229c9f1]
2025-04-10 00:01:35.615802 | orchestrator | 00:01:35.615 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating...
2025-04-10 00:01:35.632085 | orchestrator | 00:01:35.631 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 11s [id=fa805255-2b65-45ba-aa52-d97cf6f3e06a]
2025-04-10 00:01:35.637771 | orchestrator | 00:01:35.637 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-04-10 00:01:35.663596 | orchestrator | 00:01:35.663 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=221f8640-be1f-4702-ab57-197a8a373172]
2025-04-10 00:01:35.670495 | orchestrator | 00:01:35.670 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating...
2025-04-10 00:01:35.672560 | orchestrator | 00:01:35.672 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=e188828f-11b5-49b7-aa2c-198471f41cb7]
2025-04-10 00:01:35.681716 | orchestrator | 00:01:35.681 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating...
2025-04-10 00:01:35.781582 | orchestrator | 00:01:35.781 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=57ed073f-7848-4dd1-911d-b06790e5cae3]
2025-04-10 00:01:35.796388 | orchestrator | 00:01:35.796 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-04-10 00:01:40.819803 | orchestrator | 00:01:40.819 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-04-10 00:01:40.983893 | orchestrator | 00:01:40.983 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=2fcb4c67-862b-4727-95c3-98e3283b8fb6]
2025-04-10 00:01:40.991812 | orchestrator | 00:01:40.991 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-04-10 00:01:45.574631 | orchestrator | 00:01:45.574 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-04-10 00:01:45.587997 | orchestrator | 00:01:45.587 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed]
2025-04-10 00:01:45.589021 | orchestrator | 00:01:45.588 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-04-10 00:01:45.604542 | orchestrator | 00:01:45.604 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed]
2025-04-10 00:01:45.616764 | orchestrator | 00:01:45.616 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed]
2025-04-10 00:01:45.639084 | orchestrator | 00:01:45.638 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-04-10 00:01:45.671638 | orchestrator | 00:01:45.671 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed]
2025-04-10 00:01:45.682647 | orchestrator | 00:01:45.682 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed]
2025-04-10 00:01:45.768790 | orchestrator | 00:01:45.768 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=89a2b2fc-2fff-49a5-ab5e-089af5d983aa]
2025-04-10 00:01:45.776379 | orchestrator | 00:01:45.775 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=6c91147a-8481-48ce-bf49-6c79ed393785]
2025-04-10 00:01:45.781282 | orchestrator | 00:01:45.780 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-04-10 00:01:45.785013 | orchestrator | 00:01:45.784 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-04-10 00:01:45.795520 | orchestrator | 00:01:45.795 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=7b59c1d3-d88b-4e69-8f5d-bfd6640ee0c1]
2025-04-10 00:01:45.797145 | orchestrator | 00:01:45.796 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-04-10 00:01:45.800781 | orchestrator | 00:01:45.800 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-04-10 00:01:45.823222 | orchestrator | 00:01:45.822 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 10s [id=53cc8dbb-b824-45fa-a2cb-804fcc96761d]
2025-04-10 00:01:45.831714 | orchestrator | 00:01:45.831 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-04-10 00:01:45.856800 | orchestrator | 00:01:45.856 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=4f117f5c-a676-4195-9d53-4eb16ef4d9e2]
2025-04-10 00:01:45.864561 | orchestrator | 00:01:45.864 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-04-10 00:01:45.877086 | orchestrator | 00:01:45.876 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=737be83d-5ee5-4854-9988-400b2ee7e7c1]
2025-04-10 00:01:45.879144 | orchestrator | 00:01:45.878 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=3b5996d2-64c6-4dbd-ad82-ae9f8c5fd05f]
2025-04-10 00:01:45.890676 | orchestrator | 00:01:45.890 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-04-10 00:01:45.891530 | orchestrator | 00:01:45.891 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-04-10 00:01:45.895640 | orchestrator | 00:01:45.895 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=5b23e9f7122a1c7989493ec54a10fa8489d29142]
2025-04-10 00:01:45.896728 | orchestrator | 00:01:45.896 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=1d279cb640fc915dc2fbbd3c18859b3705884e0a]
2025-04-10 00:01:45.902381 | orchestrator | 00:01:45.902 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-04-10 00:01:45.906883 | orchestrator | 00:01:45.906 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 10s [id=b0ed1186-9beb-4d4b-adab-3343747bf238]
2025-04-10 00:01:46.130575 | orchestrator | 00:01:46.130 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=28ff6eda-e1e7-4701-8f57-9f1d22e0371b]
2025-04-10 00:01:50.992619 | orchestrator | 00:01:50.992 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-04-10 00:01:51.323078 | orchestrator | 00:01:51.322 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=1d967eed-d41f-4ed0-858d-bb16f205f817]
2025-04-10 00:01:51.586297 | orchestrator | 00:01:51.585 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=722e8e14-92dd-44ba-af61-c4b59758ac81]
2025-04-10 00:01:51.594331 | orchestrator | 00:01:51.594 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-04-10 00:01:55.782522 | orchestrator | 00:01:55.782 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-04-10 00:01:55.786658 | orchestrator | 00:01:55.786 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-04-10 00:01:55.802409 | orchestrator | 00:01:55.802 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-04-10 00:01:55.832831 | orchestrator | 00:01:55.832 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-04-10 00:01:55.866613 | orchestrator | 00:01:55.866 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-04-10 00:01:56.162797 | orchestrator | 00:01:56.162 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=4ab85bb6-830c-4bb2-a981-885b30070cf3]
2025-04-10 00:01:56.180887 | orchestrator | 00:01:56.180 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=f3113d67-a712-4d61-8002-b363d5a12e6a]
2025-04-10 00:01:56.213002 | orchestrator | 00:01:56.212 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=b8a544e2-f8fb-4bb9-a080-c9f48e09edc5]
2025-04-10 00:01:56.222625 | orchestrator | 00:01:56.222 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=5a1799d3-5dff-4635-a47a-02ec0b10ee7e]
2025-04-10 00:01:56.225125 | orchestrator | 00:01:56.224 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=1651f83f-0ee2-4e26-b483-3c086e6d5fb5]
2025-04-10 00:01:59.048263 | orchestrator | 00:01:59.047 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=3d0059a4-cf08-44c7-a4f9-db8827690277]
2025-04-10 00:01:59.052616 | orchestrator | 00:01:59.052 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-04-10 00:01:59.054591 | orchestrator | 00:01:59.054 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-04-10 00:01:59.054711 | orchestrator | 00:01:59.054 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-04-10 00:01:59.156488 | orchestrator | 00:01:59.156 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=673ac191-5821-495e-857f-1c611b8dadba]
2025-04-10 00:01:59.171698 | orchestrator | 00:01:59.171 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-04-10 00:01:59.172531 | orchestrator | 00:01:59.172 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-04-10 00:01:59.173268 | orchestrator | 00:01:59.173 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-04-10 00:01:59.178446 | orchestrator | 00:01:59.178 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-04-10 00:01:59.179062 | orchestrator | 00:01:59.178 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-04-10 00:01:59.180330 | orchestrator | 00:01:59.180 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-04-10 00:01:59.181734 | orchestrator | 00:01:59.181 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-04-10 00:01:59.184135 | orchestrator | 00:01:59.184 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-04-10 00:01:59.220206 | orchestrator | 00:01:59.219 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=8df580d3-ca57-4e51-9d4b-78147a828106]
2025-04-10 00:01:59.228920 | orchestrator | 00:01:59.228 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-04-10 00:01:59.290505 | orchestrator | 00:01:59.290 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=2e386d73-39ef-4043-b0dc-b5aea8b14306]
2025-04-10 00:01:59.305559 | orchestrator | 00:01:59.305 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-04-10 00:01:59.409246 | orchestrator | 00:01:59.408 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=6fa1529f-0137-4c23-81b4-70a8ad9b5ec1]
2025-04-10 00:01:59.415916 | orchestrator | 00:01:59.415 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-04-10 00:01:59.521284 | orchestrator | 00:01:59.520 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=63f3f183-d4aa-4cfe-a941-1f8c93904b73]
2025-04-10 00:01:59.526105 | orchestrator | 00:01:59.525 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-04-10 00:01:59.562726 | orchestrator | 00:01:59.562 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=3e692ba8-2bd0-4bc1-8cd8-75861a1f2064]
2025-04-10 00:01:59.570874 | orchestrator | 00:01:59.570 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-04-10 00:01:59.649144 | orchestrator | 00:01:59.648 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=9f71430b-f285-4f23-abae-ed367bea2e6e]
2025-04-10 00:01:59.656716 | orchestrator | 00:01:59.656 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-04-10 00:01:59.671918 | orchestrator | 00:01:59.671 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=d17f0dbc-3077-4163-9e25-1f2755e845c2]
2025-04-10 00:01:59.679517 | orchestrator | 00:01:59.679 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-04-10 00:01:59.771408 | orchestrator | 00:01:59.770 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=6da85fc8-7e73-4ddb-9a70-42e7be65cedf]
2025-04-10 00:01:59.782758 | orchestrator | 00:01:59.782 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-04-10 00:01:59.894477 | orchestrator | 00:01:59.893 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=e6399e58-ce4d-48fd-a774-20ef9ef69fea]
2025-04-10 00:02:00.029032 | orchestrator | 00:02:00.028 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=069382c9-39d2-4fba-ae68-605c8648305a]
2025-04-10 00:02:04.885369 | orchestrator | 00:02:04.884 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=a27f62e8-ea26-4216-a857-5cb79f06e033]
2025-04-10 00:02:04.890133 | orchestrator | 00:02:04.889 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=f1804656-7ed6-4b10-a60c-40dcc8d195d2]
2025-04-10 00:02:05.025537 | orchestrator | 00:02:05.025 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=5d7d56d5-f3c6-4ebf-8d8f-b6f588bffa0f]
2025-04-10 00:02:05.030062 | orchestrator | 00:02:05.029 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=cf5aa8a8-8732-4e78-89e4-b04b572c5f93]
2025-04-10 00:02:05.230320 | orchestrator | 00:02:05.229 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=37e6858d-0b12-45b6-a737-d195ed607fff]
2025-04-10 00:02:05.327485 | orchestrator | 00:02:05.327 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=046cd395-8908-4cb1-b76c-c581d3ba991a]
2025-04-10 00:02:05.480841 | orchestrator | 00:02:05.480 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=086380e6-4501-4281-83cb-6fafb5322da3]
2025-04-10 00:02:06.168758 | orchestrator | 00:02:06.168 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=71cab94b-a848-42f5-b7ee-e8c5b6bb876e]
2025-04-10 00:02:06.191273 | orchestrator | 00:02:06.191 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-04-10 00:02:06.206303 | orchestrator | 00:02:06.206 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-04-10 00:02:06.212354 | orchestrator | 00:02:06.210 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-04-10 00:02:06.218652 | orchestrator | 00:02:06.211 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-04-10 00:02:06.218721 | orchestrator | 00:02:06.218 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-04-10 00:02:06.220246 | orchestrator | 00:02:06.220 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-04-10 00:02:06.222603 | orchestrator | 00:02:06.222 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-04-10 00:02:13.453918 | orchestrator | 00:02:13.453 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=19e5dfe7-dcbf-4ab6-b157-906fc961470c]
2025-04-10 00:02:13.465333 | orchestrator | 00:02:13.464 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-04-10 00:02:13.471554 | orchestrator | 00:02:13.471 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-04-10 00:02:13.471691 | orchestrator | 00:02:13.471 STDOUT terraform: local_file.inventory: Creating...
2025-04-10 00:02:13.478094 | orchestrator | 00:02:13.477 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ef00d2c725819b18f4149af74210988f03d39afd]
2025-04-10 00:02:13.481997 | orchestrator | 00:02:13.481 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=e7d8c8e8b09598dcda5a310334a096c4f9d5caeb]
2025-04-10 00:02:14.045987 | orchestrator | 00:02:14.045 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=19e5dfe7-dcbf-4ab6-b157-906fc961470c]
2025-04-10 00:02:16.208300 | orchestrator | 00:02:16.207 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-04-10 00:02:16.213467 | orchestrator | 00:02:16.213 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-04-10 00:02:16.221681 | orchestrator | 00:02:16.221 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-04-10 00:02:16.223086 | orchestrator | 00:02:16.222 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-04-10 00:02:16.223329 | orchestrator | 00:02:16.222 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-04-10 00:02:16.227239 | orchestrator | 00:02:16.226 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-04-10 00:02:26.209518 | orchestrator | 00:02:26.209 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-04-10 00:02:26.213612 | orchestrator | 00:02:26.213 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-04-10 00:02:26.221896 | orchestrator | 00:02:26.221 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-04-10 00:02:26.223031 | orchestrator | 00:02:26.222 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-04-10 00:02:26.223116 | orchestrator | 00:02:26.222 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-04-10 00:02:26.227124 | orchestrator | 00:02:26.226 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-04-10 00:02:26.640734 | orchestrator | 00:02:26.640 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=fc55dcef-ed0f-4715-97c0-f4c5a7c784fc]
2025-04-10 00:02:26.734464 | orchestrator | 00:02:26.734 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=fdee2d73-7372-4090-a4dd-3c7bc0a81367]
2025-04-10 00:02:27.651506 | orchestrator | 00:02:27.651 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 22s [id=caeed7ca-9e68-444c-9e99-2eadeef4239d]
2025-04-10 00:02:27.671875 | orchestrator | 00:02:27.671 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 22s [id=1a1c4014-cb2e-467f-bb5b-05dc8ffe64a2]
2025-04-10 00:02:36.223896 | orchestrator | 00:02:36.223 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-04-10 00:02:36.979293 | orchestrator | 00:02:36.223 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-04-10 00:02:36.979390 | orchestrator | 00:02:36.979 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=854b0a13-3fd6-4b5a-a3aa-1f8ba012d2e6]
2025-04-10 00:02:36.981491 | orchestrator | 00:02:36.981 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=4777c270-2392-4e66-b809-9accfc3a27e9]
2025-04-10 00:02:36.995872 | orchestrator | 00:02:36.995 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-04-10 00:02:36.999953 | orchestrator | 00:02:36.999 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=6656311728834535214]
2025-04-10 00:02:37.005702 | orchestrator | 00:02:37.005 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-04-10 00:02:37.006444 | orchestrator | 00:02:37.006 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating...
2025-04-10 00:02:37.014087 | orchestrator | 00:02:37.013 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating...
2025-04-10 00:02:37.014576 | orchestrator | 00:02:37.014 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating...
2025-04-10 00:02:37.027016 | orchestrator | 00:02:37.026 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-04-10 00:02:37.028896 | orchestrator | 00:02:37.028 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating...
2025-04-10 00:02:37.030583 | orchestrator | 00:02:37.030 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-04-10 00:02:37.032317 | orchestrator | 00:02:37.032 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-04-10 00:02:37.034159 | orchestrator | 00:02:37.034 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-04-10 00:02:37.037604 | orchestrator | 00:02:37.037 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating...
2025-04-10 00:02:42.362746 | orchestrator | 00:02:42.362 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=fdee2d73-7372-4090-a4dd-3c7bc0a81367/53cc8dbb-b824-45fa-a2cb-804fcc96761d]
2025-04-10 00:02:42.365770 | orchestrator | 00:02:42.365 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=4777c270-2392-4e66-b809-9accfc3a27e9/89a2b2fc-2fff-49a5-ab5e-089af5d983aa]
2025-04-10 00:02:42.377097 | orchestrator | 00:02:42.376 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating...
2025-04-10 00:02:42.380881 | orchestrator | 00:02:42.380 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=caeed7ca-9e68-444c-9e99-2eadeef4239d/221f8640-be1f-4702-ab57-197a8a373172]
2025-04-10 00:02:42.384762 | orchestrator | 00:02:42.384 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-04-10 00:02:42.387829 | orchestrator | 00:02:42.387 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-04-10 00:02:42.391468 | orchestrator | 00:02:42.391 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 5s [id=854b0a13-3fd6-4b5a-a3aa-1f8ba012d2e6/fa805255-2b65-45ba-aa52-d97cf6f3e06a]
2025-04-10 00:02:42.399872 | orchestrator | 00:02:42.399 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 5s [id=fc55dcef-ed0f-4715-97c0-f4c5a7c784fc/4f117f5c-a676-4195-9d53-4eb16ef4d9e2]
2025-04-10 00:02:42.405087 | orchestrator | 00:02:42.404 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=1a1c4014-cb2e-467f-bb5b-05dc8ffe64a2/6c91147a-8481-48ce-bf49-6c79ed393785]
2025-04-10 00:02:42.406951 | orchestrator | 00:02:42.406 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating...
2025-04-10 00:02:42.410717 | orchestrator | 00:02:42.410 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=fdee2d73-7372-4090-a4dd-3c7bc0a81367/737be83d-5ee5-4854-9988-400b2ee7e7c1]
2025-04-10 00:02:42.411868 | orchestrator | 00:02:42.411 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=4777c270-2392-4e66-b809-9accfc3a27e9/5bfdc91a-8c25-4ce3-95e3-852e7229c9f1]
2025-04-10 00:02:42.413790 | orchestrator | 00:02:42.413 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating...
2025-04-10 00:02:42.416064 | orchestrator | 00:02:42.415 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating...
2025-04-10 00:02:42.420678 | orchestrator | 00:02:42.420 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 5s [id=1a1c4014-cb2e-467f-bb5b-05dc8ffe64a2/3b5996d2-64c6-4dbd-ad82-ae9f8c5fd05f]
2025-04-10 00:02:42.421697 | orchestrator | 00:02:42.421 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-04-10 00:02:42.427226 | orchestrator | 00:02:42.427 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-04-10 00:02:42.432074 | orchestrator | 00:02:42.431 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-04-10 00:02:42.439570 | orchestrator | 00:02:42.439 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=fdee2d73-7372-4090-a4dd-3c7bc0a81367/2fcb4c67-862b-4727-95c3-98e3283b8fb6]
2025-04-10 00:02:47.720716 | orchestrator | 00:02:47.720 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 6s [id=caeed7ca-9e68-444c-9e99-2eadeef4239d/8309ccf2-021f-4ba0-8871-1baa1ae2c644]
2025-04-10 00:02:47.732750 | orchestrator | 00:02:47.731 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=4777c270-2392-4e66-b809-9accfc3a27e9/c760fe92-14ba-404a-b6f7-3b1432fac79b]
2025-04-10 00:02:47.740651 | orchestrator | 00:02:47.740 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=1a1c4014-cb2e-467f-bb5b-05dc8ffe64a2/d97216ad-03db-4dc0-9fce-19fb462ce1e2]
2025-04-10 00:02:47.752094 | orchestrator | 00:02:47.751 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 6s [id=caeed7ca-9e68-444c-9e99-2eadeef4239d/7b59c1d3-d88b-4e69-8f5d-bfd6640ee0c1]
2025-04-10 00:02:47.754857 | orchestrator | 00:02:47.754 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 6s [id=fc55dcef-ed0f-4715-97c0-f4c5a7c784fc/57ed073f-7848-4dd1-911d-b06790e5cae3]
2025-04-10 00:02:47.763802 | orchestrator | 00:02:47.763 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 6s [id=854b0a13-3fd6-4b5a-a3aa-1f8ba012d2e6/b0ed1186-9beb-4d4b-adab-3343747bf238]
2025-04-10 00:02:47.774980 | orchestrator | 00:02:47.774 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=fc55dcef-ed0f-4715-97c0-f4c5a7c784fc/e188828f-11b5-49b7-aa2c-198471f41cb7]
2025-04-10 00:02:47.784105 | orchestrator | 00:02:47.783 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=854b0a13-3fd6-4b5a-a3aa-1f8ba012d2e6/864e33c6-b4c3-48eb-91b8-2629744c3ba6]
2025-04-10 00:02:52.433012 | orchestrator | 00:02:52.432 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-04-10 00:03:02.437345 | orchestrator | 00:03:02.436 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-04-10 00:03:03.029485 | orchestrator | 00:03:03.029 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=e8d70cc3-9db6-4e91-af11-60fcfbf76884]
2025-04-10 00:03:03.052305 | orchestrator | 00:03:03.052 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed.
2025-04-10 00:03:03.052384 | orchestrator | 00:03:03.052 STDOUT terraform: Outputs:
2025-04-10 00:03:03.052404 | orchestrator | 00:03:03.052 STDOUT terraform: manager_address =
2025-04-10 00:03:03.059530 | orchestrator | 00:03:03.052 STDOUT terraform: private_key =
2025-04-10 00:03:13.213451 | orchestrator | changed
2025-04-10 00:03:13.251995 |
2025-04-10 00:03:13.252114 | TASK [Fetch manager address]
2025-04-10 00:03:13.681505 | orchestrator | ok
2025-04-10 00:03:13.694391 |
2025-04-10 00:03:13.694519 | TASK [Set manager_host address]
2025-04-10 00:03:13.807177 | orchestrator | ok
2025-04-10 00:03:13.818854 |
2025-04-10 00:03:13.818978 | LOOP [Update ansible collections]
2025-04-10 00:03:14.579022 | orchestrator | changed
2025-04-10 00:03:15.305215 | orchestrator | changed
2025-04-10 00:03:15.328600 |
2025-04-10 00:03:15.328749 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-04-10 00:03:25.868617 | orchestrator | ok
2025-04-10 00:03:25.882503 |
2025-04-10 00:03:25.882614 | TASK [Wait a little longer for the manager so that everything is ready]
2025-04-10 00:04:25.930022 | orchestrator | ok
2025-04-10 00:04:25.941008 |
2025-04-10 00:04:25.941117 | TASK [Fetch manager ssh hostkey]
2025-04-10 00:04:27.016988 | orchestrator | Output suppressed because no_log was given
2025-04-10 00:04:27.034683 |
2025-04-10 00:04:27.034817 | TASK [Get ssh keypair from terraform environment]
2025-04-10 00:04:27.581252 | orchestrator | changed
2025-04-10 00:04:27.601757 |
2025-04-10 00:04:27.601894 | TASK [Point out that the following task takes some time and does not give any output]
2025-04-10 00:04:27.637969 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
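The "Wait up to 300 seconds for port 22..." task above maps naturally onto Ansible's `ansible.builtin.wait_for` module. A minimal sketch of what such a task could look like, assuming a `manager_host` variable holding the manager's floating IP (the variable name and delegation are assumptions based on the task names in the log, not taken from the actual playbook):

```yaml
# Hypothetical reconstruction of the wait task named in the log.
# `manager_host` is an assumed variable; host, regex, and timeout
# follow the task name "Wait up to 300 seconds for port 22 to
# become open and contain "OpenSSH"".
- name: Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"
  ansible.builtin.wait_for:
    host: "{{ manager_host }}"
    port: 22
    search_regex: OpenSSH
    timeout: 300
  delegate_to: localhost
```

Matching on `OpenSSH` in the banner (rather than only checking that the port is open) avoids racing against a half-booted instance whose sshd is not yet serving connections.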
2025-04-10 00:04:27.647148 |
2025-04-10 00:04:27.647252 | TASK [Run manager part 0]
2025-04-10 00:04:28.593138 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-04-10 00:04:28.635092 | orchestrator |
2025-04-10 00:04:30.668480 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-04-10 00:04:30.668545 | orchestrator |
2025-04-10 00:04:30.668562 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-04-10 00:04:30.668578 | orchestrator | ok: [testbed-manager]
2025-04-10 00:04:32.562205 | orchestrator |
2025-04-10 00:04:32.562301 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-04-10 00:04:32.562317 | orchestrator |
2025-04-10 00:04:32.562324 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-10 00:04:32.562338 | orchestrator | ok: [testbed-manager]
2025-04-10 00:04:33.195556 | orchestrator |
2025-04-10 00:04:33.195622 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-04-10 00:04:33.195639 | orchestrator | ok: [testbed-manager]
2025-04-10 00:04:33.246119 | orchestrator |
2025-04-10 00:04:33.246166 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-04-10 00:04:33.246181 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:04:33.277014 | orchestrator |
2025-04-10 00:04:33.277046 | orchestrator | TASK [Update package cache] ****************************************************
2025-04-10 00:04:33.277059 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:04:33.302602 | orchestrator |
2025-04-10 00:04:33.302628 | orchestrator | TASK [Install required packages] ***********************************************
2025-04-10 00:04:33.302639 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:04:33.330008 | orchestrator |
2025-04-10 00:04:33.330049 | orchestrator | TASK [Remove some python packages] *********************************************
2025-04-10 00:04:33.330061 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:04:33.355457 | orchestrator |
2025-04-10 00:04:33.355474 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-04-10 00:04:33.355484 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:04:33.386068 | orchestrator |
2025-04-10 00:04:33.386088 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-04-10 00:04:33.386098 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:04:33.409835 | orchestrator |
2025-04-10 00:04:33.409872 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-04-10 00:04:33.409885 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:04:34.312214 | orchestrator |
2025-04-10 00:04:34.312346 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-04-10 00:04:34.312391 | orchestrator | changed: [testbed-manager]
2025-04-10 00:07:36.089947 | orchestrator |
2025-04-10 00:07:36.089999 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-04-10 00:07:36.090057 | orchestrator | changed: [testbed-manager]
2025-04-10 00:08:52.805833 | orchestrator |
2025-04-10 00:08:52.805931 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-04-10 00:08:52.805960 | orchestrator | changed: [testbed-manager]
2025-04-10 00:09:16.122851 | orchestrator |
2025-04-10 00:09:16.122975 | orchestrator | TASK [Install required packages] ***********************************************
2025-04-10 00:09:16.123014 | orchestrator | changed: [testbed-manager]
2025-04-10 00:09:25.435323 | orchestrator |
2025-04-10 00:09:25.435522 | orchestrator | TASK [Remove some python packages] *********************************************
2025-04-10 00:09:25.435557 | orchestrator | changed: [testbed-manager]
2025-04-10 00:09:25.475706 | orchestrator |
2025-04-10 00:09:25.475772 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-04-10 00:09:25.475797 | orchestrator | ok: [testbed-manager]
2025-04-10 00:09:26.271380 | orchestrator |
2025-04-10 00:09:26.271456 | orchestrator | TASK [Get current user] ********************************************************
2025-04-10 00:09:26.271475 | orchestrator | ok: [testbed-manager]
2025-04-10 00:09:27.005752 | orchestrator |
2025-04-10 00:09:27.005853 | orchestrator | TASK [Create venv directory] ***************************************************
2025-04-10 00:09:27.005891 | orchestrator | changed: [testbed-manager]
2025-04-10 00:09:33.836204 | orchestrator |
2025-04-10 00:09:33.836328 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-04-10 00:09:33.836367 | orchestrator | changed: [testbed-manager]
2025-04-10 00:09:40.360652 | orchestrator |
2025-04-10 00:09:40.360726 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-04-10 00:09:40.360756 | orchestrator | changed: [testbed-manager]
2025-04-10 00:09:43.194422 | orchestrator |
2025-04-10 00:09:43.194465 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-04-10 00:09:43.194478 | orchestrator | changed: [testbed-manager]
2025-04-10 00:09:45.072301 | orchestrator |
2025-04-10 00:09:45.072354 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-04-10 00:09:45.072372 | orchestrator | changed: [testbed-manager]
2025-04-10 00:09:46.220804 | orchestrator |
2025-04-10 00:09:46.220882 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-04-10 00:09:46.220903 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-04-10 00:09:46.259971 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-04-10 00:09:46.260069 | orchestrator |
2025-04-10 00:09:46.260091 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-04-10 00:09:46.260118 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-04-10 00:09:51.454989 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-04-10 00:09:51.455089 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-04-10 00:09:51.455104 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-04-10 00:09:51.455128 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-04-10 00:09:52.090141 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-04-10 00:09:52.090253 | orchestrator |
2025-04-10 00:09:52.090275 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-04-10 00:09:52.090306 | orchestrator | changed: [testbed-manager]
2025-04-10 00:10:12.294001 | orchestrator |
2025-04-10 00:10:12.294145 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-04-10 00:10:12.294180 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-04-10 00:10:14.822949 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-04-10 00:10:14.822991 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-04-10 00:10:14.822998 | orchestrator |
2025-04-10 00:10:14.823005 | orchestrator | TASK [Install local collections] ***********************************************
2025-04-10 00:10:14.823019 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-04-10 00:10:16.265496 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-04-10 00:10:16.265543 | orchestrator |
2025-04-10 00:10:16.265551 | orchestrator | PLAY [Create operator user] ****************************************************
2025-04-10 00:10:16.265558 | orchestrator |
2025-04-10 00:10:16.265565 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-10 00:10:16.265578 | orchestrator | ok: [testbed-manager]
2025-04-10 00:10:16.314260 | orchestrator |
2025-04-10 00:10:16.314313 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-04-10 00:10:16.314332 | orchestrator | ok: [testbed-manager]
2025-04-10 00:10:16.380281 | orchestrator |
2025-04-10 00:10:16.380334 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-04-10 00:10:16.380353 | orchestrator | ok: [testbed-manager]
2025-04-10 00:10:17.151497 | orchestrator |
2025-04-10 00:10:17.151579 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-04-10 00:10:17.151603 | orchestrator | changed: [testbed-manager]
2025-04-10 00:10:17.862984 | orchestrator |
2025-04-10 00:10:17.863085 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-04-10 00:10:17.863119 | orchestrator | changed: [testbed-manager]
2025-04-10 00:10:19.319945 | orchestrator |
2025-04-10 00:10:19.320050 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-04-10 00:10:19.320085 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-04-10 00:10:20.730983 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-04-10 00:10:20.731088 | orchestrator |
2025-04-10 00:10:20.731109 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file]
************************* 2025-04-10 00:10:20.731140 | orchestrator | changed: [testbed-manager] 2025-04-10 00:10:22.545913 | orchestrator | 2025-04-10 00:10:22.545963 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-10 00:10:22.545982 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-04-10 00:10:23.152329 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-04-10 00:10:23.419606 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-04-10 00:10:23.419689 | orchestrator | 2025-04-10 00:10:23.419708 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-10 00:10:23.419740 | orchestrator | changed: [testbed-manager] 2025-04-10 00:10:24.150407 | orchestrator | 2025-04-10 00:10:24.150481 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-10 00:10:24.150490 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:10:24.150497 | orchestrator | 2025-04-10 00:10:24.150502 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-04-10 00:10:24.150515 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-10 00:10:24.183710 | orchestrator | changed: [testbed-manager] 2025-04-10 00:10:24.183815 | orchestrator | 2025-04-10 00:10:24.183836 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-04-10 00:10:24.183867 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:10:24.218984 | orchestrator | 2025-04-10 00:10:24.219063 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-04-10 00:10:24.219092 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:10:24.250513 | orchestrator | 2025-04-10 00:10:24.250553 | orchestrator | TASK [osism.commons.operator : Delete 
authorized GitHub accounts] ************** 2025-04-10 00:10:24.250574 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:10:24.298615 | orchestrator | 2025-04-10 00:10:24.298721 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-04-10 00:10:24.298754 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:10:25.078348 | orchestrator | 2025-04-10 00:10:25.078505 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-04-10 00:10:25.078543 | orchestrator | ok: [testbed-manager] 2025-04-10 00:10:26.454389 | orchestrator | 2025-04-10 00:10:26.454523 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-10 00:10:26.454544 | orchestrator | 2025-04-10 00:10:26.454559 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-10 00:10:26.454588 | orchestrator | ok: [testbed-manager] 2025-04-10 00:10:27.446712 | orchestrator | 2025-04-10 00:10:27.446754 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-04-10 00:10:27.446768 | orchestrator | changed: [testbed-manager] 2025-04-10 00:10:27.550147 | orchestrator | 2025-04-10 00:10:27.550220 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:10:27.550228 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-04-10 00:10:27.550234 | orchestrator | 2025-04-10 00:10:27.927445 | orchestrator | changed 2025-04-10 00:10:27.948263 | 2025-04-10 00:10:27.948429 | TASK [Point out that the log in on the manager is now possible] 2025-04-10 00:10:27.996422 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
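The PLAY RECAP line above is the usual place to verify a run before continuing. A minimal sketch of such a gate (the `check_recap` helper and the sample line are illustrative, not part of this job):

```shell
# Sample recap line, copied from the log above.
recap='testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0'

# Hypothetical helper: succeed only when the recap reports no unreachable
# hosts and no failed tasks.
check_recap() {
  case "$1" in
    *"unreachable=0"*"failed=0"*) return 0 ;;
    *) return 1 ;;
  esac
}

if check_recap "$recap"; then
  echo "recap clean"
else
  echo "recap has failures"
fi
```

A wrapper script could use this to abort a deployment pipeline early instead of relying on the playbook's exit code alone.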
2025-04-10 00:10:28.007507 |
2025-04-10 00:10:28.007620 | TASK [Point out that the following task takes some time and does not give any output]
2025-04-10 00:10:28.058633 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-04-10 00:10:28.069608 |
2025-04-10 00:10:28.069720 | TASK [Run manager part 1 + 2]
2025-04-10 00:10:28.888204 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-04-10 00:10:28.940936 | orchestrator |
2025-04-10 00:10:31.434622 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-04-10 00:10:31.434688 | orchestrator |
2025-04-10 00:10:31.434703 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-10 00:10:31.434722 | orchestrator | ok: [testbed-manager]
2025-04-10 00:10:31.473716 | orchestrator |
2025-04-10 00:10:31.473779 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-04-10 00:10:31.473800 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:10:31.514236 | orchestrator |
2025-04-10 00:10:31.514294 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-04-10 00:10:31.514313 | orchestrator | ok: [testbed-manager]
2025-04-10 00:10:31.549338 | orchestrator |
2025-04-10 00:10:31.549412 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-04-10 00:10:31.549431 | orchestrator | ok: [testbed-manager]
2025-04-10 00:10:31.611708 | orchestrator |
2025-04-10 00:10:31.611768 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-04-10 00:10:31.611787 | orchestrator | ok: [testbed-manager]
2025-04-10 00:10:31.668005 | orchestrator |
2025-04-10 00:10:31.668055 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-04-10 00:10:31.668073 | orchestrator | ok: [testbed-manager]
2025-04-10 00:10:31.715785 | orchestrator |
2025-04-10 00:10:31.715828 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-04-10 00:10:31.715842 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-04-10 00:10:32.425994 | orchestrator |
2025-04-10 00:10:32.426061 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-04-10 00:10:32.426080 | orchestrator | ok: [testbed-manager]
2025-04-10 00:10:32.471053 | orchestrator |
2025-04-10 00:10:32.471099 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-04-10 00:10:32.471116 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:10:33.816289 | orchestrator |
2025-04-10 00:10:33.816346 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-04-10 00:10:33.816371 | orchestrator | changed: [testbed-manager]
2025-04-10 00:10:34.379079 | orchestrator |
2025-04-10 00:10:34.379128 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-04-10 00:10:34.379146 | orchestrator | ok: [testbed-manager]
2025-04-10 00:10:35.539197 | orchestrator |
2025-04-10 00:10:35.539263 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-04-10 00:10:35.539292 | orchestrator | changed: [testbed-manager]
2025-04-10 00:10:48.601632 | orchestrator |
2025-04-10 00:10:48.601828 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-04-10 00:10:48.601867 | orchestrator | changed: [testbed-manager]
2025-04-10 00:10:49.276178 | orchestrator |
2025-04-10 00:10:49.276296 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-04-10 00:10:49.276329 | orchestrator | ok: [testbed-manager]
2025-04-10 00:10:49.332791 | orchestrator |
2025-04-10 00:10:49.332881 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-04-10 00:10:49.332904 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:10:50.334479 | orchestrator |
2025-04-10 00:10:50.334578 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-04-10 00:10:50.334602 | orchestrator | changed: [testbed-manager]
2025-04-10 00:10:51.332210 | orchestrator |
2025-04-10 00:10:51.332316 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-04-10 00:10:51.332347 | orchestrator | changed: [testbed-manager]
2025-04-10 00:10:51.926747 | orchestrator |
2025-04-10 00:10:51.926851 | orchestrator | TASK [Create configuration directory] ******************************************
2025-04-10 00:10:51.926886 | orchestrator | changed: [testbed-manager]
2025-04-10 00:10:51.965071 | orchestrator |
2025-04-10 00:10:51.965136 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-04-10 00:10:51.965153 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-04-10 00:10:55.443779 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-04-10 00:10:55.443898 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-04-10 00:10:55.443928 | orchestrator | deprecation_warnings=False in ansible.cfg.
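The deprecation warning above names its own remedy: setting `deprecation_warnings=False` in `ansible.cfg`. A minimal sketch of writing such a config (the `./ansible.cfg` path is an example; this is not part of the job itself):

```shell
# Write a minimal ansible.cfg that silences Ansible deprecation warnings,
# as suggested by the warning text in the log above.
cfg=./ansible.cfg
cat > "$cfg" <<'EOF'
[defaults]
deprecation_warnings = False
EOF

# Confirm the setting landed in the file.
grep -q 'deprecation_warnings = False' "$cfg" && echo "configured"
```

Ansible also reads `ANSIBLE_CONFIG`, `~/.ansible.cfg`, and `/etc/ansible/ansible.cfg`; the project-local file shown here takes precedence when present in the working directory.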
2025-04-10 00:10:55.443971 | orchestrator | changed: [testbed-manager]
2025-04-10 00:11:05.276885 | orchestrator |
2025-04-10 00:11:05.277008 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-04-10 00:11:05.277053 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-04-10 00:11:06.401463 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-04-10 00:11:06.401558 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-04-10 00:11:06.401577 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-04-10 00:11:06.401593 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-04-10 00:11:06.401607 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-04-10 00:11:06.401622 | orchestrator |
2025-04-10 00:11:06.401637 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-04-10 00:11:06.401679 | orchestrator | changed: [testbed-manager]
2025-04-10 00:11:06.443312 | orchestrator |
2025-04-10 00:11:06.443434 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-04-10 00:11:06.443471 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:11:09.303241 | orchestrator |
2025-04-10 00:11:09.303355 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-04-10 00:11:09.303420 | orchestrator | changed: [testbed-manager]
2025-04-10 00:11:09.344909 | orchestrator |
2025-04-10 00:11:09.344994 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-04-10 00:11:09.345026 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:12:53.286861 | orchestrator |
2025-04-10 00:12:53.287023 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-04-10 00:12:53.287067 | orchestrator | changed: [testbed-manager]
2025-04-10 00:12:54.482178 | orchestrator |
2025-04-10 00:12:54.482283 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-04-10 00:12:54.482317 | orchestrator | ok: [testbed-manager]
2025-04-10 00:12:54.576207 | orchestrator |
2025-04-10 00:12:54.576310 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:12:54.576356 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-04-10 00:12:54.576374 | orchestrator |
2025-04-10 00:12:54.699388 | orchestrator | changed
2025-04-10 00:12:54.712914 |
2025-04-10 00:12:54.713025 | TASK [Reboot manager]
2025-04-10 00:12:56.254369 | orchestrator | changed
2025-04-10 00:12:56.273230 |
2025-04-10 00:12:56.273351 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-04-10 00:13:10.752380 | orchestrator | ok
2025-04-10 00:13:10.764377 |
2025-04-10 00:13:10.764511 | TASK [Wait a little longer for the manager so that everything is ready]
2025-04-10 00:14:10.817474 | orchestrator | ok
2025-04-10 00:14:10.829548 |
2025-04-10 00:14:10.829672 | TASK [Deploy manager + bootstrap nodes]
2025-04-10 00:14:15.283944 | orchestrator |
2025-04-10 00:14:15.286328 | orchestrator | # DEPLOY MANAGER
2025-04-10 00:14:15.286393 | orchestrator |
2025-04-10 00:14:15.286411 | orchestrator | + set -e
2025-04-10 00:14:15.286457 | orchestrator | + echo
2025-04-10 00:14:15.286476 | orchestrator | + echo '# DEPLOY MANAGER'
2025-04-10 00:14:15.286493 | orchestrator | + echo
2025-04-10 00:14:15.286518 | orchestrator | + cat /opt/manager-vars.sh
2025-04-10 00:14:15.286554 | orchestrator | export NUMBER_OF_NODES=6
2025-04-10 00:14:15.287416 | orchestrator |
2025-04-10 00:14:15.287439 | orchestrator | export CEPH_VERSION=quincy
2025-04-10 00:14:15.287454 | orchestrator | export CONFIGURATION_VERSION=main
2025-04-10 00:14:15.287469 | orchestrator | export MANAGER_VERSION=8.1.0
2025-04-10 00:14:15.287483 | orchestrator | export OPENSTACK_VERSION=2024.1
2025-04-10 00:14:15.287497 | orchestrator |
2025-04-10 00:14:15.287512 | orchestrator | export ARA=false
2025-04-10 00:14:15.287526 | orchestrator | export TEMPEST=false
2025-04-10 00:14:15.287541 | orchestrator | export IS_ZUUL=true
2025-04-10 00:14:15.287555 | orchestrator |
2025-04-10 00:14:15.287569 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.103
2025-04-10 00:14:15.287584 | orchestrator | export EXTERNAL_API=false
2025-04-10 00:14:15.287598 | orchestrator |
2025-04-10 00:14:15.287612 | orchestrator | export IMAGE_USER=ubuntu
2025-04-10 00:14:15.287626 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-04-10 00:14:15.287640 | orchestrator |
2025-04-10 00:14:15.287654 | orchestrator | export CEPH_STACK=ceph-ansible
2025-04-10 00:14:15.287668 | orchestrator |
2025-04-10 00:14:15.287682 | orchestrator | + echo
2025-04-10 00:14:15.287696 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-04-10 00:14:15.287715 | orchestrator | ++ export INTERACTIVE=false
2025-04-10 00:14:15.287984 | orchestrator | ++ INTERACTIVE=false
2025-04-10 00:14:15.288024 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-04-10 00:14:15.288062 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-04-10 00:14:15.288089 | orchestrator | + source /opt/manager-vars.sh
2025-04-10 00:14:15.288114 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-04-10 00:14:15.288138 | orchestrator | ++ NUMBER_OF_NODES=6
2025-04-10 00:14:15.288164 | orchestrator | ++ export CEPH_VERSION=quincy
2025-04-10 00:14:15.288189 | orchestrator | ++ CEPH_VERSION=quincy
2025-04-10 00:14:15.288215 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-04-10 00:14:15.288241 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-04-10 00:14:15.288275 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-04-10 00:14:15.288327 | orchestrator | ++ MANAGER_VERSION=8.1.0
2025-04-10 00:14:15.288342 | orchestrator | ++ export OPENSTACK_VERSION=2024.1
2025-04-10 00:14:15.288356 | orchestrator | ++ OPENSTACK_VERSION=2024.1
2025-04-10 00:14:15.288370 | orchestrator | ++ export ARA=false
2025-04-10 00:14:15.288384 | orchestrator | ++ ARA=false
2025-04-10 00:14:15.288398 | orchestrator | ++ export TEMPEST=false
2025-04-10 00:14:15.288412 | orchestrator | ++ TEMPEST=false
2025-04-10 00:14:15.288426 | orchestrator | ++ export IS_ZUUL=true
2025-04-10 00:14:15.288440 | orchestrator | ++ IS_ZUUL=true
2025-04-10 00:14:15.288454 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.103
2025-04-10 00:14:15.288469 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.103
2025-04-10 00:14:15.288499 | orchestrator | ++ export EXTERNAL_API=false
2025-04-10 00:14:15.351114 | orchestrator | ++ EXTERNAL_API=false
2025-04-10 00:14:15.351218 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-04-10 00:14:15.351233 | orchestrator | ++ IMAGE_USER=ubuntu
2025-04-10 00:14:15.351248 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-04-10 00:14:15.351263 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-04-10 00:14:15.351323 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-04-10 00:14:15.351339 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-04-10 00:14:15.351355 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-04-10 00:14:15.351397 | orchestrator | + docker version
2025-04-10 00:14:15.656497 | orchestrator | Client: Docker Engine - Community
2025-04-10 00:14:15.658244 | orchestrator | Version: 26.1.4
2025-04-10 00:14:15.658277 | orchestrator | API version: 1.45
2025-04-10 00:14:15.658330 | orchestrator | Go version: go1.21.11
2025-04-10 00:14:15.658344 | orchestrator | Git commit: 5650f9b
2025-04-10 00:14:15.658359 | orchestrator | Built: Wed Jun 5 11:28:57 2024
2025-04-10 00:14:15.658374 | orchestrator | OS/Arch: linux/amd64
2025-04-10 00:14:15.658388 | orchestrator | Context: default
2025-04-10 00:14:15.658402 | orchestrator |
2025-04-10 00:14:15.658416 | orchestrator | Server: Docker Engine - Community
2025-04-10 00:14:15.658430 | orchestrator | Engine:
2025-04-10 00:14:15.658444 | orchestrator | Version: 26.1.4
2025-04-10 00:14:15.658459 | orchestrator | API version: 1.45 (minimum version 1.24)
2025-04-10 00:14:15.658472 | orchestrator | Go version: go1.21.11
2025-04-10 00:14:15.658488 | orchestrator | Git commit: de5c9cf
2025-04-10 00:14:15.658532 | orchestrator | Built: Wed Jun 5 11:28:57 2024
2025-04-10 00:14:15.658546 | orchestrator | OS/Arch: linux/amd64
2025-04-10 00:14:15.658560 | orchestrator | Experimental: false
2025-04-10 00:14:15.658574 | orchestrator | containerd:
2025-04-10 00:14:15.658588 | orchestrator | Version: 1.7.27
2025-04-10 00:14:15.658602 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-04-10 00:14:15.658616 | orchestrator | runc:
2025-04-10 00:14:15.658630 | orchestrator | Version: 1.2.5
2025-04-10 00:14:15.658644 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-04-10 00:14:15.658658 | orchestrator | docker-init:
2025-04-10 00:14:15.658672 | orchestrator | Version: 0.19.0
2025-04-10 00:14:15.658686 | orchestrator | GitCommit: de40ad0
2025-04-10 00:14:15.658706 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-04-10 00:14:15.665165 | orchestrator | + set -e
2025-04-10 00:14:15.665362 | orchestrator | + source /opt/manager-vars.sh
2025-04-10 00:14:15.665397 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-04-10 00:14:15.665412 | orchestrator | ++ NUMBER_OF_NODES=6
2025-04-10 00:14:15.665425 | orchestrator | ++ export CEPH_VERSION=quincy
2025-04-10 00:14:15.665439 | orchestrator | ++ CEPH_VERSION=quincy
2025-04-10 00:14:15.665453 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-04-10 00:14:15.665467 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-04-10 00:14:15.665481 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-04-10 00:14:15.665495 | orchestrator | ++ MANAGER_VERSION=8.1.0
2025-04-10 00:14:15.665509 | orchestrator | ++ export OPENSTACK_VERSION=2024.1
2025-04-10 00:14:15.665523 | orchestrator | ++ OPENSTACK_VERSION=2024.1
2025-04-10 00:14:15.665537 | orchestrator | ++ export ARA=false
2025-04-10 00:14:15.665550 | orchestrator | ++ ARA=false
2025-04-10 00:14:15.665564 | orchestrator | ++ export TEMPEST=false
2025-04-10 00:14:15.665578 | orchestrator | ++ TEMPEST=false
2025-04-10 00:14:15.665592 | orchestrator | ++ export IS_ZUUL=true
2025-04-10 00:14:15.665605 | orchestrator | ++ IS_ZUUL=true
2025-04-10 00:14:15.665625 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.103
2025-04-10 00:14:15.670582 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.103
2025-04-10 00:14:15.670605 | orchestrator | ++ export EXTERNAL_API=false
2025-04-10 00:14:15.670620 | orchestrator | ++ EXTERNAL_API=false
2025-04-10 00:14:15.670634 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-04-10 00:14:15.670648 | orchestrator | ++ IMAGE_USER=ubuntu
2025-04-10 00:14:15.670662 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-04-10 00:14:15.670676 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-04-10 00:14:15.670689 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-04-10 00:14:15.670704 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-04-10 00:14:15.670718 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-04-10 00:14:15.670739 | orchestrator | ++ export INTERACTIVE=false
2025-04-10 00:14:15.670753 | orchestrator | ++ INTERACTIVE=false
2025-04-10 00:14:15.670767 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-04-10 00:14:15.670787 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-04-10 00:14:15.670801 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-04-10 00:14:15.670818 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0
2025-04-10 00:14:15.670837 | orchestrator | + set -e
2025-04-10 00:14:15.677565 | orchestrator | + VERSION=8.1.0
2025-04-10 00:14:15.677590 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml
2025-04-10 00:14:15.677615 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-04-10 00:14:15.681779 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-04-10 00:14:15.681812 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-04-10 00:14:15.685093 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-04-10 00:14:15.689925 | orchestrator | /opt/configuration ~
2025-04-10 00:14:15.691646 | orchestrator | + set -e
2025-04-10 00:14:15.691668 | orchestrator | + pushd /opt/configuration
2025-04-10 00:14:15.691683 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-04-10 00:14:15.691701 | orchestrator | + source /opt/venv/bin/activate
2025-04-10 00:14:15.692436 | orchestrator | ++ deactivate nondestructive
2025-04-10 00:14:15.692488 | orchestrator | ++ '[' -n '' ']'
2025-04-10 00:14:15.692505 | orchestrator | ++ '[' -n '' ']'
2025-04-10 00:14:15.692525 | orchestrator | ++ hash -r
2025-04-10 00:14:15.692738 | orchestrator | ++ '[' -n '' ']'
2025-04-10 00:14:15.692861 | orchestrator | ++ unset VIRTUAL_ENV
2025-04-10 00:14:15.692880 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-04-10 00:14:15.692896 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-04-10 00:14:15.692936 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-04-10 00:14:15.692951 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-04-10 00:14:15.692965 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-04-10 00:14:15.692982 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-04-10 00:14:15.692997 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-04-10 00:14:15.693030 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-04-10 00:14:17.033427 | orchestrator | ++ export PATH
2025-04-10 00:14:17.033548 | orchestrator | ++ '[' -n '' ']'
2025-04-10 00:14:17.033562 | orchestrator | ++ '[' -z '' ']'
2025-04-10 00:14:17.033571 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-04-10 00:14:17.033582 | orchestrator | ++ PS1='(venv) '
2025-04-10 00:14:17.033592 | orchestrator | ++ export PS1
2025-04-10 00:14:17.033601 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-04-10 00:14:17.033611 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-04-10 00:14:17.033621 | orchestrator | ++ hash -r
2025-04-10 00:14:17.033632 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-04-10 00:14:17.033659 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-04-10 00:14:17.035425 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3)
2025-04-10 00:14:17.036332 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-04-10 00:14:17.037661 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-04-10 00:14:17.039160 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (24.2)
2025-04-10 00:14:17.049644 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8)
2025-04-10 00:14:17.051229 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-04-10 00:14:17.053401 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-04-10 00:14:17.053833 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-04-10 00:14:17.088213 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.1)
2025-04-10 00:14:17.089578 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-04-10 00:14:17.091229 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.3.0)
2025-04-10 00:14:17.092742 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.1.31)
2025-04-10 00:14:17.096901 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-04-10 00:14:17.343974 | orchestrator | ++ which gilt
2025-04-10 00:14:17.347094 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-04-10 00:14:17.603368 | orchestrator | + /opt/venv/bin/gilt overlay
2025-04-10 00:14:17.603503 | orchestrator | osism.cfg-generics:
2025-04-10 00:14:19.118877 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics
2025-04-10 00:14:19.119035 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-04-10 00:14:20.066920 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-04-10 00:14:20.067057 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-04-10 00:14:20.067089 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-04-10 00:14:20.067143 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-04-10 00:14:20.078686 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-04-10 00:14:20.422890 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-04-10 00:14:20.480732 | orchestrator | ~
2025-04-10 00:14:20.481989 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-04-10 00:14:20.482088 | orchestrator | + deactivate
2025-04-10 00:14:20.482127 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-04-10 00:14:20.482144 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-04-10 00:14:20.482158 | orchestrator | + export PATH
2025-04-10 00:14:20.482172 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-04-10 00:14:20.482187 | orchestrator | + '[' -n '' ']'
2025-04-10 00:14:20.482200 | orchestrator | + hash -r
2025-04-10 00:14:20.482214 | orchestrator | + '[' -n '' ']'
2025-04-10 00:14:20.482228 | orchestrator | + unset VIRTUAL_ENV
2025-04-10 00:14:20.482242 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-04-10 00:14:20.482256 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-04-10 00:14:20.482273 | orchestrator | + unset -f deactivate
2025-04-10 00:14:20.482318 | orchestrator | + popd
2025-04-10 00:14:20.482342 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]]
2025-04-10 00:14:20.546608 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-04-10 00:14:20.546713 | orchestrator | ++ semver 8.1.0 7.0.0
2025-04-10 00:14:20.546746 | orchestrator | + [[ 1 -ge 0 ]]
2025-04-10 00:14:20.593611 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-04-10 00:14:20.593654 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-04-10 00:14:20.593676 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-04-10 00:14:20.593760 | orchestrator | + source /opt/venv/bin/activate
2025-04-10 00:14:20.593781 | orchestrator | ++ deactivate nondestructive
2025-04-10 00:14:20.593938 | orchestrator | ++ '[' -n '' ']'
2025-04-10 00:14:20.593957 | orchestrator | ++ '[' -n '' ']'
2025-04-10 00:14:20.593984 | orchestrator | ++ hash -r
2025-04-10 00:14:20.594113 | orchestrator | ++ '[' -n '' ']'
2025-04-10 00:14:20.594201 | orchestrator | ++ unset VIRTUAL_ENV
2025-04-10 00:14:20.594230 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-04-10 00:14:20.594246 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-04-10 00:14:20.594501 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-04-10 00:14:20.594613 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-04-10 00:14:20.594629 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-04-10 00:14:20.594642 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-04-10 00:14:20.594659 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-04-10 00:14:20.594766 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-04-10 00:14:20.594785 | orchestrator | ++ export PATH
2025-04-10 00:14:20.594871 | orchestrator | ++ '[' -n '' ']'
2025-04-10 00:14:20.594932 | orchestrator | ++ '[' -z '' ']'
2025-04-10 00:14:20.595066 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-04-10 00:14:20.595083 | orchestrator | ++ PS1='(venv) '
2025-04-10 00:14:20.595099 | orchestrator | ++ export PS1
2025-04-10 00:14:20.595167 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-04-10 00:14:20.595184 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-04-10 00:14:20.595201 | orchestrator | ++ hash -r
2025-04-10 00:14:20.595217 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-04-10 00:14:22.037482 | orchestrator |
2025-04-10 00:14:22.668699 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-04-10 00:14:22.668828 | orchestrator |
2025-04-10 00:14:22.668850 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-04-10 00:14:22.668883 | orchestrator | ok: [testbed-manager]
2025-04-10 00:14:23.765565 | orchestrator |
2025-04-10 00:14:23.765693 | orchestrator | TASK [Copy fact files] *********************************************************
2025-04-10 00:14:23.765732 | orchestrator | changed: [testbed-manager] 2025-04-10 00:14:27.043998 | orchestrator | 2025-04-10 00:14:27.044104 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-04-10 00:14:27.044113 | orchestrator | 2025-04-10 00:14:27.044119 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-10 00:14:27.044136 | orchestrator | ok: [testbed-manager] 2025-04-10 00:14:33.106791 | orchestrator | 2025-04-10 00:14:33.106931 | orchestrator | TASK [Pull images] ************************************************************* 2025-04-10 00:14:33.106996 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-04-10 00:16:00.871535 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2) 2025-04-10 00:16:00.871682 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-04-10 00:16:00.871704 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-04-10 00:16:00.871720 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-04-10 00:16:00.871737 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine) 2025-04-10 00:16:00.871751 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-04-10 00:16:00.871766 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-04-10 00:16:00.871780 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-04-10 00:16:00.871802 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine) 2025-04-10 00:16:00.871818 | orchestrator | changed: [testbed-manager] => 
(item=index.docker.io/library/traefik:v3.2.1) 2025-04-10 00:16:00.871833 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2) 2025-04-10 00:16:00.871847 | orchestrator | 2025-04-10 00:16:00.871863 | orchestrator | TASK [Check status] ************************************************************ 2025-04-10 00:16:00.871900 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-10 00:16:00.928012 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-04-10 00:16:00.928119 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-04-10 00:16:00.928136 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-04-10 00:16:00.928150 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (116 retries left). 2025-04-10 00:16:00.928169 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j182986341155.1590', 'results_file': '/home/dragon/.ansible_async/j182986341155.1590', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-04-10 00:16:00.928201 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j364588238797.1615', 'results_file': '/home/dragon/.ansible_async/j364588238797.1615', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-04-10 00:16:00.928216 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-04-10 00:16:00.928231 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j868320744057.1640', 'results_file': '/home/dragon/.ansible_async/j868320744057.1640', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-10 00:16:00.928281 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j538478576268.1672', 'results_file': '/home/dragon/.ansible_async/j538478576268.1672', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-10 00:16:00.928301 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-10 00:16:00.928317 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j381582770965.1705', 'results_file': '/home/dragon/.ansible_async/j381582770965.1705', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-10 00:16:00.928331 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j537580778857.1745', 'results_file': '/home/dragon/.ansible_async/j537580778857.1745', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-04-10 00:16:00.928377 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-04-10 00:16:00.928392 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j634061884024.1771', 'results_file': '/home/dragon/.ansible_async/j634061884024.1771', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-04-10 00:16:00.928407 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j441219976909.1812', 'results_file': '/home/dragon/.ansible_async/j441219976909.1812', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-10 00:16:00.928422 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j404245088535.1838', 'results_file': '/home/dragon/.ansible_async/j404245088535.1838', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-04-10 00:16:00.928436 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j876072845754.1871', 'results_file': '/home/dragon/.ansible_async/j876072845754.1871', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-04-10 00:16:00.928451 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j817072721831.1910', 'results_file': '/home/dragon/.ansible_async/j817072721831.1910', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-04-10 00:16:00.928465 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j403761555493.1937', 'results_file': '/home/dragon/.ansible_async/j403761555493.1937', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-04-10 00:16:00.928479 | 
orchestrator | 2025-04-10 00:16:00.928494 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-04-10 00:16:00.928524 | orchestrator | ok: [testbed-manager] 2025-04-10 00:16:01.429408 | orchestrator | 2025-04-10 00:16:01.429529 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-04-10 00:16:01.429565 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:01.794393 | orchestrator | 2025-04-10 00:16:01.794519 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-04-10 00:16:01.794558 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:02.163127 | orchestrator | 2025-04-10 00:16:02.163269 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-04-10 00:16:02.163303 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:02.209977 | orchestrator | 2025-04-10 00:16:02.210088 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-04-10 00:16:02.210118 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:16:02.557539 | orchestrator | 2025-04-10 00:16:02.557660 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-04-10 00:16:02.557695 | orchestrator | ok: [testbed-manager] 2025-04-10 00:16:02.735327 | orchestrator | 2025-04-10 00:16:02.735466 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-04-10 00:16:02.735514 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:16:04.608864 | orchestrator | 2025-04-10 00:16:04.608991 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-04-10 00:16:04.609012 | orchestrator | 2025-04-10 00:16:04.609028 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-10 
00:16:04.609062 | orchestrator | ok: [testbed-manager] 2025-04-10 00:16:04.826626 | orchestrator | 2025-04-10 00:16:04.826744 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-04-10 00:16:04.826779 | orchestrator | 2025-04-10 00:16:04.936758 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-04-10 00:16:04.936890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-04-10 00:16:06.089637 | orchestrator | 2025-04-10 00:16:06.089770 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-04-10 00:16:06.089810 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-04-10 00:16:08.035621 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-04-10 00:16:08.035762 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-04-10 00:16:08.035783 | orchestrator | 2025-04-10 00:16:08.035799 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-04-10 00:16:08.035846 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-04-10 00:16:08.743745 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-04-10 00:16:08.743867 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-04-10 00:16:08.743887 | orchestrator | 2025-04-10 00:16:08.743902 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-04-10 00:16:08.743932 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-10 00:16:09.432004 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:09.432127 | orchestrator | 2025-04-10 00:16:09.432147 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 
2025-04-10 00:16:09.432179 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-10 00:16:09.501716 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:09.501810 | orchestrator | 2025-04-10 00:16:09.501822 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-04-10 00:16:09.501845 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:16:09.896947 | orchestrator | 2025-04-10 00:16:09.897067 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-04-10 00:16:09.897102 | orchestrator | ok: [testbed-manager] 2025-04-10 00:16:10.018002 | orchestrator | 2025-04-10 00:16:10.018168 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-04-10 00:16:10.018203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-04-10 00:16:11.085656 | orchestrator | 2025-04-10 00:16:11.085806 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-04-10 00:16:11.085860 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:12.052407 | orchestrator | 2025-04-10 00:16:12.052527 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-04-10 00:16:12.052564 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:15.306519 | orchestrator | 2025-04-10 00:16:15.306625 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-04-10 00:16:15.306654 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:15.628871 | orchestrator | 2025-04-10 00:16:15.628988 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-04-10 00:16:15.629026 | orchestrator | 2025-04-10 00:16:15.741631 | orchestrator | TASK [osism.services.netbox : Include install 
tasks] *************************** 2025-04-10 00:16:15.741696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-04-10 00:16:22.215783 | orchestrator | 2025-04-10 00:16:22.215966 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-04-10 00:16:22.216007 | orchestrator | ok: [testbed-manager] 2025-04-10 00:16:22.379762 | orchestrator | 2025-04-10 00:16:22.379871 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-04-10 00:16:22.379905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-04-10 00:16:23.577115 | orchestrator | 2025-04-10 00:16:23.577278 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-04-10 00:16:23.577315 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-04-10 00:16:23.697714 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-04-10 00:16:23.697782 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-04-10 00:16:23.697798 | orchestrator | 2025-04-10 00:16:23.697813 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-04-10 00:16:23.697843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-04-10 00:16:24.407001 | orchestrator | 2025-04-10 00:16:24.407124 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-04-10 00:16:24.407161 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-04-10 00:16:25.081804 | orchestrator | 2025-04-10 00:16:25.081939 | orchestrator | TASK [osism.services.netbox : Copy 
postgres configuration file] **************** 2025-04-10 00:16:25.082003 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:25.751842 | orchestrator | 2025-04-10 00:16:25.751966 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-04-10 00:16:25.752004 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-10 00:16:26.172716 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:26.172838 | orchestrator | 2025-04-10 00:16:26.172857 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-04-10 00:16:26.172888 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:26.548414 | orchestrator | 2025-04-10 00:16:26.548529 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-04-10 00:16:26.548564 | orchestrator | ok: [testbed-manager] 2025-04-10 00:16:26.601437 | orchestrator | 2025-04-10 00:16:26.601525 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-04-10 00:16:26.601567 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:16:27.275384 | orchestrator | 2025-04-10 00:16:27.275506 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-04-10 00:16:27.275541 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:27.394521 | orchestrator | 2025-04-10 00:16:27.394660 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-04-10 00:16:27.394699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-04-10 00:16:28.205104 | orchestrator | 2025-04-10 00:16:28.205290 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-04-10 00:16:28.205357 | orchestrator | changed: [testbed-manager] => 
(item=/opt/netbox/configuration/initializers) 2025-04-10 00:16:28.917868 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-04-10 00:16:28.917987 | orchestrator | 2025-04-10 00:16:28.918006 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-04-10 00:16:28.918098 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-04-10 00:16:29.623307 | orchestrator | 2025-04-10 00:16:29.623433 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-04-10 00:16:29.623466 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:29.707565 | orchestrator | 2025-04-10 00:16:29.707674 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-04-10 00:16:29.707698 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:16:30.365074 | orchestrator | 2025-04-10 00:16:30.365190 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-04-10 00:16:30.365221 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:32.290600 | orchestrator | 2025-04-10 00:16:32.291543 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-04-10 00:16:32.291599 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-10 00:16:38.499289 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-10 00:16:38.499420 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-10 00:16:38.499437 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:38.499452 | orchestrator | 2025-04-10 00:16:38.499466 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-04-10 00:16:38.499495 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-04-10 00:16:39.151299 | orchestrator | changed: [testbed-manager] => 
(item=device_roles) 2025-04-10 00:16:39.151423 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-04-10 00:16:39.151442 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-04-10 00:16:39.151458 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-04-10 00:16:39.151473 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-04-10 00:16:39.151488 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-04-10 00:16:39.151534 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-04-10 00:16:39.151549 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-04-10 00:16:39.151564 | orchestrator | changed: [testbed-manager] => (item=users) 2025-04-10 00:16:39.151579 | orchestrator | 2025-04-10 00:16:39.151594 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-04-10 00:16:39.151625 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-04-10 00:16:39.353705 | orchestrator | 2025-04-10 00:16:39.353826 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-04-10 00:16:39.353861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-04-10 00:16:40.107751 | orchestrator | 2025-04-10 00:16:40.107872 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-04-10 00:16:40.107906 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:40.774295 | orchestrator | 2025-04-10 00:16:40.774399 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-04-10 00:16:40.774425 | orchestrator | ok: [testbed-manager] 2025-04-10 00:16:41.547472 | orchestrator | 2025-04-10 
00:16:41.547598 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-04-10 00:16:41.547635 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:47.262169 | orchestrator | 2025-04-10 00:16:47.262318 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-04-10 00:16:47.262358 | orchestrator | changed: [testbed-manager] 2025-04-10 00:16:48.293618 | orchestrator | 2025-04-10 00:16:48.293803 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-04-10 00:16:48.293842 | orchestrator | ok: [testbed-manager] 2025-04-10 00:17:10.577705 | orchestrator | 2025-04-10 00:17:10.577846 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-04-10 00:17:10.577885 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-04-10 00:17:10.665766 | orchestrator | ok: [testbed-manager] 2025-04-10 00:17:10.665848 | orchestrator | 2025-04-10 00:17:10.665866 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-04-10 00:17:10.665896 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:17:10.726805 | orchestrator | 2025-04-10 00:17:10.726906 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-04-10 00:17:10.726923 | orchestrator | 2025-04-10 00:17:10.726939 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-04-10 00:17:10.726966 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:17:10.828989 | orchestrator | 2025-04-10 00:17:10.829092 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-04-10 00:17:10.829124 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-04-10 00:17:11.709190 | orchestrator | 2025-04-10 00:17:11.709361 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-04-10 00:17:11.709400 | orchestrator | ok: [testbed-manager] 2025-04-10 00:17:11.817695 | orchestrator | 2025-04-10 00:17:11.817806 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-04-10 00:17:11.817840 | orchestrator | ok: [testbed-manager] 2025-04-10 00:17:11.900787 | orchestrator | 2025-04-10 00:17:11.900899 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-04-10 00:17:11.900931 | orchestrator | ok: [testbed-manager] => { 2025-04-10 00:17:12.605624 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-04-10 00:17:12.605744 | orchestrator | } 2025-04-10 00:17:12.605763 | orchestrator | 2025-04-10 00:17:12.605779 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-04-10 00:17:12.605810 | orchestrator | ok: [testbed-manager] 2025-04-10 00:17:13.599910 | orchestrator | 2025-04-10 00:17:13.600034 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-04-10 00:17:13.600070 | orchestrator | ok: [testbed-manager] 2025-04-10 00:17:13.696079 | orchestrator | 2025-04-10 00:17:13.696121 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-04-10 00:17:13.696146 | orchestrator | ok: [testbed-manager] 2025-04-10 00:17:13.756102 | orchestrator | 2025-04-10 00:17:13.756146 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-04-10 00:17:13.756184 | orchestrator | ok: [testbed-manager] => { 2025-04-10 00:17:13.818117 | orchestrator | "msg": 
"The major version of the postgres image is 16" 2025-04-10 00:17:13.818180 | orchestrator | } 2025-04-10 00:17:13.818195 | orchestrator | 2025-04-10 00:17:13.818239 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-04-10 00:17:13.818268 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:17:13.881054 | orchestrator | 2025-04-10 00:17:13.881086 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-04-10 00:17:13.881107 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:17:13.953482 | orchestrator | 2025-04-10 00:17:13.953519 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-04-10 00:17:13.953541 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:17:14.023599 | orchestrator | 2025-04-10 00:17:14.023723 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-04-10 00:17:14.023762 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:17:14.085849 | orchestrator | 2025-04-10 00:17:14.085961 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-04-10 00:17:14.085993 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:17:14.159080 | orchestrator | 2025-04-10 00:17:14.159178 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-04-10 00:17:14.159204 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:17:16.531028 | orchestrator | 2025-04-10 00:17:16.531172 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-04-10 00:17:16.531290 | orchestrator | changed: [testbed-manager] 2025-04-10 00:17:16.656927 | orchestrator | 2025-04-10 00:17:16.657038 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-04-10 00:17:16.657071 | orchestrator 
| ok: [testbed-manager] 2025-04-10 00:18:16.724532 | orchestrator | 2025-04-10 00:18:16.724705 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-04-10 00:18:16.724760 | orchestrator | Pausing for 60 seconds 2025-04-10 00:18:16.816354 | orchestrator | changed: [testbed-manager] 2025-04-10 00:18:16.816470 | orchestrator | 2025-04-10 00:18:16.816488 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-04-10 00:18:16.816518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-04-10 00:23:00.636359 | orchestrator | 2025-04-10 00:23:00.636791 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-04-10 00:23:00.636874 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-04-10 00:23:02.773895 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-04-10 00:23:02.774077 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-04-10 00:23:02.774100 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-04-10 00:23:02.774115 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-04-10 00:23:02.774130 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-04-10 00:23:02.774189 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 
2025-04-10 00:23:02.774205 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-04-10 00:23:02.774219 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-04-10 00:23:02.774234 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-04-10 00:23:02.774278 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-04-10 00:23:02.774293 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-04-10 00:23:02.774308 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-04-10 00:23:02.774322 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-04-10 00:23:02.774336 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-04-10 00:23:02.774350 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-04-10 00:23:02.774365 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-04-10 00:23:02.774379 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-04-10 00:23:02.774393 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-04-10 00:23:02.774419 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 
2025-04-10 00:23:02.774434 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-04-10 00:23:02.774452 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-04-10 00:23:02.774468 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-04-10 00:23:02.774484 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 2025-04-10 00:23:02.774501 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (36 retries left). 2025-04-10 00:23:02.774517 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (35 retries left). 2025-04-10 00:23:02.774532 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (34 retries left). 
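The handler being retried above ("Check that all containers are in a good state") polls Docker until no container is in a bad state. A minimal sketch of such a check, assuming the stock `docker` CLI (the function name is illustrative; `health` and `status` are real `docker ps` filters, but the role's actual criteria are not visible in the log):

```shell
#!/bin/sh
# Succeed only when no container is unhealthy and none has exited.
# Hypothetical sketch of the condition behind the retried handler above.
containers_healthy() {
    bad=$({ docker ps --filter health=unhealthy --format '{{.Names}}'
            docker ps -a --filter status=exited --format '{{.Names}}'; })
    [ -z "$bad" ]
}
```

Wrapped in a retry loop (or an Ansible `until`/`retries` task, as here), this converges once the restarted netbox stack settles.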
2025-04-10 00:23:02.774548 | orchestrator | changed: [testbed-manager] 2025-04-10 00:23:02.774565 | orchestrator | 2025-04-10 00:23:02.774582 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-04-10 00:23:02.774598 | orchestrator | 2025-04-10 00:23:02.774615 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-10 00:23:02.774647 | orchestrator | ok: [testbed-manager] 2025-04-10 00:23:02.906729 | orchestrator | 2025-04-10 00:23:02.906853 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-04-10 00:23:02.906887 | orchestrator | 2025-04-10 00:23:03.003302 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-04-10 00:23:03.003470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-04-10 00:23:04.864123 | orchestrator | 2025-04-10 00:23:04.864308 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-04-10 00:23:04.864346 | orchestrator | ok: [testbed-manager] 2025-04-10 00:23:04.922522 | orchestrator | 2025-04-10 00:23:04.922612 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-04-10 00:23:04.922631 | orchestrator | ok: [testbed-manager] 2025-04-10 00:23:05.030440 | orchestrator | 2025-04-10 00:23:05.030556 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-04-10 00:23:05.030590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-04-10 00:23:08.005200 | orchestrator | 2025-04-10 00:23:08.005333 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-04-10 
00:23:08.005372 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-04-10 00:23:08.723359 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-04-10 00:23:08.723514 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-04-10 00:23:08.723534 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-04-10 00:23:08.723548 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-04-10 00:23:08.723563 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-04-10 00:23:08.723577 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-04-10 00:23:08.723592 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-04-10 00:23:08.723605 | orchestrator | 2025-04-10 00:23:08.723620 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-04-10 00:23:08.723652 | orchestrator | changed: [testbed-manager] 2025-04-10 00:23:08.816460 | orchestrator | 2025-04-10 00:23:08.816586 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-04-10 00:23:08.816626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-04-10 00:23:10.088620 | orchestrator | 2025-04-10 00:23:10.088746 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-04-10 00:23:10.088783 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-04-10 00:23:10.767349 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-04-10 00:23:10.767525 | orchestrator | 2025-04-10 00:23:10.767550 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-04-10 00:23:10.767606 | orchestrator | changed: [testbed-manager] 2025-04-10 00:23:10.823949 | orchestrator | 2025-04-10 
00:23:10.824015 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-04-10 00:23:10.824042 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:23:10.922535 | orchestrator | 2025-04-10 00:23:10.922656 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-04-10 00:23:10.922693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-04-10 00:23:12.389646 | orchestrator | 2025-04-10 00:23:12.389763 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-04-10 00:23:12.389795 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-10 00:23:13.074426 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-10 00:23:13.074546 | orchestrator | changed: [testbed-manager] 2025-04-10 00:23:13.074566 | orchestrator | 2025-04-10 00:23:13.074582 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-04-10 00:23:13.074613 | orchestrator | changed: [testbed-manager] 2025-04-10 00:23:13.175933 | orchestrator | 2025-04-10 00:23:13.176023 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-04-10 00:23:13.176042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-04-10 00:23:13.847049 | orchestrator | 2025-04-10 00:23:13.847262 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-04-10 00:23:13.847304 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-10 00:23:14.498349 | orchestrator | changed: [testbed-manager] 2025-04-10 00:23:14.498461 | orchestrator | 2025-04-10 00:23:14.498478 | orchestrator | TASK [osism.services.manager : Copy netbox 
environment file] ******************* 2025-04-10 00:23:14.498501 | orchestrator | changed: [testbed-manager] 2025-04-10 00:23:14.625836 | orchestrator | 2025-04-10 00:23:14.625932 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-04-10 00:23:14.625953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-04-10 00:23:15.202335 | orchestrator | 2025-04-10 00:23:15.202481 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-04-10 00:23:15.203179 | orchestrator | changed: [testbed-manager] 2025-04-10 00:23:15.675616 | orchestrator | 2025-04-10 00:23:15.675736 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-04-10 00:23:15.675776 | orchestrator | changed: [testbed-manager] 2025-04-10 00:23:17.091536 | orchestrator | 2025-04-10 00:23:17.091663 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-04-10 00:23:17.091730 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-04-10 00:23:17.826994 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-04-10 00:23:17.827118 | orchestrator | 2025-04-10 00:23:17.827176 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-04-10 00:23:17.827210 | orchestrator | changed: [testbed-manager] 2025-04-10 00:23:18.256993 | orchestrator | 2025-04-10 00:23:18.257100 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-04-10 00:23:18.257132 | orchestrator | ok: [testbed-manager] 2025-04-10 00:23:18.628079 | orchestrator | 2025-04-10 00:23:18.628214 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-04-10 00:23:18.628245 | orchestrator | changed: 
[testbed-manager] 2025-04-10 00:23:18.666618 | orchestrator | 2025-04-10 00:23:18.666740 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-04-10 00:23:18.666779 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:23:18.806194 | orchestrator | 2025-04-10 00:23:18.806319 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-04-10 00:23:18.806356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-04-10 00:23:18.853278 | orchestrator | 2025-04-10 00:23:18.853362 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-04-10 00:23:18.853391 | orchestrator | ok: [testbed-manager] 2025-04-10 00:23:20.989705 | orchestrator | 2025-04-10 00:23:20.989837 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-04-10 00:23:20.989874 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-04-10 00:23:21.738918 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-04-10 00:23:21.739024 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-04-10 00:23:21.739037 | orchestrator | 2025-04-10 00:23:21.739047 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-04-10 00:23:21.739071 | orchestrator | changed: [testbed-manager] 2025-04-10 00:23:22.516813 | orchestrator | 2025-04-10 00:23:22.516938 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-04-10 00:23:22.516975 | orchestrator | changed: [testbed-manager] 2025-04-10 00:23:22.612555 | orchestrator | 2025-04-10 00:23:22.612670 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-04-10 00:23:22.612703 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-04-10 00:23:22.668392 | orchestrator | 2025-04-10 00:23:22.668474 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-04-10 00:23:22.668504 | orchestrator | ok: [testbed-manager] 2025-04-10 00:23:23.462343 | orchestrator | 2025-04-10 00:23:23.462462 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-04-10 00:23:23.462494 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-04-10 00:23:23.558106 | orchestrator | 2025-04-10 00:23:23.558266 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-04-10 00:23:23.558301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-04-10 00:23:24.322398 | orchestrator | 2025-04-10 00:23:24.322549 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-04-10 00:23:24.322608 | orchestrator | changed: [testbed-manager] 2025-04-10 00:23:25.033111 | orchestrator | 2025-04-10 00:23:25.033289 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-04-10 00:23:25.033328 | orchestrator | ok: [testbed-manager] 2025-04-10 00:23:25.088971 | orchestrator | 2025-04-10 00:23:25.089096 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-04-10 00:23:25.089133 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:23:25.151519 | orchestrator | 2025-04-10 00:23:25.151631 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-04-10 00:23:25.151664 | orchestrator | ok: [testbed-manager] 2025-04-10 00:23:26.076310 | orchestrator | 2025-04-10 00:23:26.076512 | 
orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-04-10 00:23:26.077389 | orchestrator | changed: [testbed-manager] 2025-04-10 00:24:06.566620 | orchestrator | 2025-04-10 00:24:06.566883 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-04-10 00:24:06.566939 | orchestrator | changed: [testbed-manager] 2025-04-10 00:24:07.298651 | orchestrator | 2025-04-10 00:24:07.298795 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-04-10 00:24:07.298834 | orchestrator | ok: [testbed-manager] 2025-04-10 00:24:09.979434 | orchestrator | 2025-04-10 00:24:09.979559 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-04-10 00:24:09.979596 | orchestrator | changed: [testbed-manager] 2025-04-10 00:24:10.054002 | orchestrator | 2025-04-10 00:24:10.054191 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-04-10 00:24:10.054225 | orchestrator | ok: [testbed-manager] 2025-04-10 00:24:10.131248 | orchestrator | 2025-04-10 00:24:10.131349 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-04-10 00:24:10.131366 | orchestrator | 2025-04-10 00:24:10.131381 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-04-10 00:24:10.131410 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:25:10.203438 | orchestrator | 2025-04-10 00:25:10.203629 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-04-10 00:25:10.203676 | orchestrator | Pausing for 60 seconds 2025-04-10 00:25:15.229395 | orchestrator | changed: [testbed-manager] 2025-04-10 00:25:15.229542 | orchestrator | 2025-04-10 00:25:15.229600 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers 
are up] *** 2025-04-10 00:25:15.229634 | orchestrator | changed: [testbed-manager] 2025-04-10 00:25:56.980203 | orchestrator | 2025-04-10 00:25:56.980384 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-04-10 00:25:56.980426 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-04-10 00:26:03.917413 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-04-10 00:26:03.917558 | orchestrator | changed: [testbed-manager] 2025-04-10 00:26:03.917579 | orchestrator | 2025-04-10 00:26:03.917594 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-04-10 00:26:03.917674 | orchestrator | changed: [testbed-manager] 2025-04-10 00:26:04.008161 | orchestrator | 2025-04-10 00:26:04.008246 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-04-10 00:26:04.008264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-04-10 00:26:04.082187 | orchestrator | 2025-04-10 00:26:04.082285 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-04-10 00:26:04.082296 | orchestrator | 2025-04-10 00:26:04.082304 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-04-10 00:26:04.082324 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:26:04.238780 | orchestrator | 2025-04-10 00:26:04.238884 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:26:04.238900 | orchestrator | testbed-manager : ok=105 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-04-10 00:26:04.238913 | orchestrator | 2025-04-10 00:26:04.238940 | orchestrator | + [[ -e 
/opt/venv/bin/activate ]] 2025-04-10 00:26:04.247062 | orchestrator | + deactivate 2025-04-10 00:26:04.247112 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-04-10 00:26:04.247129 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-10 00:26:04.247170 | orchestrator | + export PATH 2025-04-10 00:26:04.247184 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-04-10 00:26:04.247195 | orchestrator | + '[' -n '' ']' 2025-04-10 00:26:04.247207 | orchestrator | + hash -r 2025-04-10 00:26:04.247218 | orchestrator | + '[' -n '' ']' 2025-04-10 00:26:04.247229 | orchestrator | + unset VIRTUAL_ENV 2025-04-10 00:26:04.247240 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-04-10 00:26:04.247252 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-04-10 00:26:04.247263 | orchestrator | + unset -f deactivate 2025-04-10 00:26:04.247304 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-04-10 00:26:04.247326 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-04-10 00:26:04.248104 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-04-10 00:26:04.248206 | orchestrator | + local max_attempts=60 2025-04-10 00:26:04.248222 | orchestrator | + local name=ceph-ansible 2025-04-10 00:26:04.248234 | orchestrator | + local attempt_num=1 2025-04-10 00:26:04.248254 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-04-10 00:26:04.277293 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-10 00:26:04.278739 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-04-10 00:26:04.278850 | orchestrator | + local max_attempts=60 2025-04-10 00:26:04.278884 | orchestrator | + local name=kolla-ansible 2025-04-10 00:26:04.278900 | orchestrator | + local attempt_num=1 2025-04-10 00:26:04.278930 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-04-10 00:26:04.317729 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-10 00:26:04.318262 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-04-10 00:26:04.318298 | orchestrator | + local max_attempts=60 2025-04-10 00:26:04.318315 | orchestrator | + local name=osism-ansible 2025-04-10 00:26:04.318329 | orchestrator | + local attempt_num=1 2025-04-10 00:26:04.318353 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-04-10 00:26:04.359888 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-10 00:26:05.063002 | orchestrator | + [[ true == \t\r\u\e ]] 2025-04-10 00:26:05.063122 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-04-10 00:26:05.063186 | orchestrator | ++ semver 8.1.0 9.0.0 2025-04-10 00:26:05.126183 | orchestrator | + [[ -1 -ge 0 ]] 2025-04-10 00:26:05.324881 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-04-10 00:26:05.324995 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-04-10 00:26:05.325079 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-04-10 00:26:05.330694 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-04-10 00:26:05.330725 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-04-10 00:26:05.330740 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-04-10 00:26:05.330776 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-04-10 
00:26:05.330791 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-04-10 00:26:05.330809 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-04-10 00:26:05.330824 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-04-10 00:26:05.330837 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 50 seconds (healthy) 2025-04-10 00:26:05.330851 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-04-10 00:26:05.330865 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-04-10 00:26:05.330905 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-04-10 00:26:05.330919 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-04-10 00:26:05.330933 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-04-10 00:26:05.330947 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-04-10 00:26:05.330961 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh 
osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-04-10 00:26:05.330975 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-04-10 00:26:05.330989 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-04-10 00:26:05.331011 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-04-10 00:26:05.469492 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-04-10 00:26:05.477084 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 9 minutes ago Up 8 minutes (healthy) 2025-04-10 00:26:05.477166 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 9 minutes ago Up 3 minutes (healthy) 2025-04-10 00:26:05.477182 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 9 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-04-10 00:26:05.477195 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 9 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-04-10 00:26:05.477216 | orchestrator | ++ semver 8.1.0 7.0.0 2025-04-10 00:26:05.525540 | orchestrator | + [[ 1 -ge 0 ]] 2025-04-10 00:26:05.529282 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-04-10 00:26:05.529324 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-04-10 00:26:07.223317 | orchestrator | 2025-04-10 00:26:07 | INFO  | Task 925e1b70-995b-49c0-96bf-b7bf26e9f624 (resolvconf) was prepared for execution. 
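The `wait_for_container_healthy` helper traced above (`max_attempts`, `name`, `attempt_num`, `docker inspect -f '{{.State.Health.Status}}'`) can be reconstructed roughly as follows. This is a sketch from the `set -x` output, not the script itself: the trace invokes `/usr/bin/docker` by absolute path, and the poll interval between attempts is not visible in the trace, so the 5-second sleep is an assumption.

```shell
#!/bin/sh
# Plausible reconstruction of the wait_for_container_healthy helper
# traced above: poll a container's health status until it reports
# "healthy", giving up after max_attempts polls.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed interval; not visible in the trace
    done
}
```

Invoked as in the log, e.g. `wait_for_container_healthy 60 ceph-ansible`; here all three containers report `healthy` on the first poll.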
2025-04-10 00:26:10.430287 | orchestrator | 2025-04-10 00:26:07 | INFO  | It takes a moment until task 925e1b70-995b-49c0-96bf-b7bf26e9f624 (resolvconf) has been started and output is visible here. 2025-04-10 00:26:10.430423 | orchestrator | 2025-04-10 00:26:10.430561 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-04-10 00:26:10.430659 | orchestrator | 2025-04-10 00:26:10.431989 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-10 00:26:10.433049 | orchestrator | Thursday 10 April 2025 00:26:10 +0000 (0:00:00.097) 0:00:00.097 ******** 2025-04-10 00:26:14.644596 | orchestrator | ok: [testbed-manager] 2025-04-10 00:26:14.645447 | orchestrator | 2025-04-10 00:26:14.646316 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-04-10 00:26:14.646882 | orchestrator | Thursday 10 April 2025 00:26:14 +0000 (0:00:04.219) 0:00:04.317 ******** 2025-04-10 00:26:14.708373 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:26:14.708937 | orchestrator | 2025-04-10 00:26:14.709262 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-04-10 00:26:14.710134 | orchestrator | Thursday 10 April 2025 00:26:14 +0000 (0:00:00.064) 0:00:04.381 ******** 2025-04-10 00:26:14.792748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-04-10 00:26:14.793303 | orchestrator | 2025-04-10 00:26:14.793465 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-04-10 00:26:14.795164 | orchestrator | Thursday 10 April 2025 00:26:14 +0000 (0:00:00.084) 0:00:04.466 ******** 2025-04-10 00:26:14.877006 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-04-10 00:26:14.877360 | orchestrator | 2025-04-10 00:26:14.877879 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-04-10 00:26:14.878295 | orchestrator | Thursday 10 April 2025 00:26:14 +0000 (0:00:00.082) 0:00:04.548 ******** 2025-04-10 00:26:16.110686 | orchestrator | ok: [testbed-manager] 2025-04-10 00:26:16.111083 | orchestrator | 2025-04-10 00:26:16.111388 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-04-10 00:26:16.111423 | orchestrator | Thursday 10 April 2025 00:26:16 +0000 (0:00:01.233) 0:00:05.782 ******** 2025-04-10 00:26:16.166958 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:26:16.168451 | orchestrator | 2025-04-10 00:26:16.168551 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-04-10 00:26:16.168930 | orchestrator | Thursday 10 April 2025 00:26:16 +0000 (0:00:00.056) 0:00:05.839 ******** 2025-04-10 00:26:16.698724 | orchestrator | ok: [testbed-manager] 2025-04-10 00:26:16.701819 | orchestrator | 2025-04-10 00:26:16.701927 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-04-10 00:26:16.701961 | orchestrator | Thursday 10 April 2025 00:26:16 +0000 (0:00:00.529) 0:00:06.368 ******** 2025-04-10 00:26:16.776351 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:26:17.378104 | orchestrator | 2025-04-10 00:26:17.378251 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-04-10 00:26:17.378272 | orchestrator | Thursday 10 April 2025 00:26:16 +0000 (0:00:00.077) 0:00:06.446 ******** 2025-04-10 00:26:17.378305 | orchestrator | changed: [testbed-manager] 2025-04-10 00:26:17.378603 | orchestrator | 2025-04-10 
00:26:17.379054 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-04-10 00:26:17.379785 | orchestrator | Thursday 10 April 2025 00:26:17 +0000 (0:00:00.603) 0:00:07.050 ******** 2025-04-10 00:26:18.586628 | orchestrator | changed: [testbed-manager] 2025-04-10 00:26:18.587059 | orchestrator | 2025-04-10 00:26:19.581297 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-04-10 00:26:19.581423 | orchestrator | Thursday 10 April 2025 00:26:18 +0000 (0:00:01.208) 0:00:08.258 ******** 2025-04-10 00:26:19.581461 | orchestrator | ok: [testbed-manager] 2025-04-10 00:26:19.582132 | orchestrator | 2025-04-10 00:26:19.582706 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-04-10 00:26:19.584034 | orchestrator | Thursday 10 April 2025 00:26:19 +0000 (0:00:00.992) 0:00:09.251 ******** 2025-04-10 00:26:19.664011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-04-10 00:26:19.664542 | orchestrator | 2025-04-10 00:26:19.664898 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-04-10 00:26:19.665476 | orchestrator | Thursday 10 April 2025 00:26:19 +0000 (0:00:00.085) 0:00:09.336 ******** 2025-04-10 00:26:20.886787 | orchestrator | changed: [testbed-manager] 2025-04-10 00:26:20.886988 | orchestrator | 2025-04-10 00:26:20.888095 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:26:20.888698 | orchestrator | 2025-04-10 00:26:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-10 00:26:20.889221 | orchestrator | 2025-04-10 00:26:20 | INFO  | Please wait and do not abort execution. 
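Earlier in the trace, `semver 8.1.0 9.0.0` prints `-1` (so `-1 -ge 0` fails) and `semver 8.1.0 7.0.0` prints `1`. A three-way version comparison with that contract can be sketched with `sort -V`; this is an assumption about the helper's behavior, not its actual implementation:

```shell
#!/bin/sh
# Hypothetical sketch of a semver-style comparison matching the traced
# contract: print -1, 0, or 1 for $1 <, =, > $2.
# Relies on version sort (`sort -V`, GNU coreutils / busybox).
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]; then
        echo -1
    else
        echo 1
    fi
}
```

With this contract, `[ "$(semver 8.1.0 9.0.0)" -ge 0 ]` is false and `[ "$(semver 8.1.0 7.0.0)" -ge 0 ]` is true, matching the two branches taken in the trace.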
2025-04-10 00:26:20.890349 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-10 00:26:20.892386 | orchestrator |
2025-04-10 00:26:20.893072 | orchestrator | Thursday 10 April 2025 00:26:20 +0000 (0:00:01.221) 0:00:10.558 ********
2025-04-10 00:26:20.893804 | orchestrator | ===============================================================================
2025-04-10 00:26:20.894311 | orchestrator | Gathering Facts --------------------------------------------------------- 4.22s
2025-04-10 00:26:20.894696 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.23s
2025-04-10 00:26:20.894956 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.22s
2025-04-10 00:26:20.896112 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.21s
2025-04-10 00:26:20.896370 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s
2025-04-10 00:26:20.896394 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.60s
2025-04-10 00:26:20.896947 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s
2025-04-10 00:26:20.897209 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-04-10 00:26:20.897713 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-04-10 00:26:20.898125 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2025-04-10 00:26:20.898611 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-04-10 00:26:20.898962 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-04-10 00:26:20.899447 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-04-10 00:26:21.355006 | orchestrator | + osism apply sshconfig
2025-04-10 00:26:22.847776 | orchestrator | 2025-04-10 00:26:22 | INFO  | Task 78675b9f-6d1d-4301-ba2e-2631200d9169 (sshconfig) was prepared for execution.
2025-04-10 00:26:26.072692 | orchestrator | 2025-04-10 00:26:22 | INFO  | It takes a moment until task 78675b9f-6d1d-4301-ba2e-2631200d9169 (sshconfig) has been started and output is visible here.
2025-04-10 00:26:26.072928 | orchestrator |
2025-04-10 00:26:26.073560 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-04-10 00:26:26.073604 | orchestrator |
2025-04-10 00:26:26.074492 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-04-10 00:26:26.075535 | orchestrator | Thursday 10 April 2025 00:26:26 +0000 (0:00:00.126) 0:00:00.126 ********
2025-04-10 00:26:26.663484 | orchestrator | ok: [testbed-manager]
2025-04-10 00:26:26.664031 | orchestrator |
2025-04-10 00:26:27.183488 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-04-10 00:26:27.183616 | orchestrator | Thursday 10 April 2025 00:26:26 +0000 (0:00:00.594) 0:00:00.721 ********
2025-04-10 00:26:27.183653 | orchestrator | changed: [testbed-manager]
2025-04-10 00:26:27.184690 | orchestrator |
2025-04-10 00:26:27.184761 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-04-10 00:26:27.184787 | orchestrator | Thursday 10 April 2025 00:26:27 +0000 (0:00:00.518) 0:00:01.239 ********
2025-04-10 00:26:33.290908 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-04-10 00:26:33.291167 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-04-10 00:26:33.291767 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-04-10 00:26:33.291977 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-04-10 00:26:33.292007 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-04-10 00:26:33.292615 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-04-10 00:26:33.292832 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-04-10 00:26:33.293209 | orchestrator |
2025-04-10 00:26:33.294551 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-04-10 00:26:33.294753 | orchestrator | Thursday 10 April 2025 00:26:33 +0000 (0:00:06.107) 0:00:07.346 ********
2025-04-10 00:26:33.370380 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:26:33.370829 | orchestrator |
2025-04-10 00:26:33.370865 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-04-10 00:26:33.371900 | orchestrator | Thursday 10 April 2025 00:26:33 +0000 (0:00:00.080) 0:00:07.427 ********
2025-04-10 00:26:33.969210 | orchestrator | changed: [testbed-manager]
2025-04-10 00:26:33.970392 | orchestrator |
2025-04-10 00:26:33.971258 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:26:33.971472 | orchestrator | 2025-04-10 00:26:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-10 00:26:33.971719 | orchestrator | 2025-04-10 00:26:33 | INFO  | Please wait and do not abort execution.
2025-04-10 00:26:33.972702 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-10 00:26:33.973279 | orchestrator |
2025-04-10 00:26:33.973931 | orchestrator | Thursday 10 April 2025 00:26:33 +0000 (0:00:00.599) 0:00:08.026 ********
2025-04-10 00:26:33.974254 | orchestrator | ===============================================================================
2025-04-10 00:26:33.974780 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.11s
2025-04-10 00:26:33.975503 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s
2025-04-10 00:26:33.976087 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.59s
2025-04-10 00:26:33.976428 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.52s
2025-04-10 00:26:33.977225 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2025-04-10 00:26:34.412210 | orchestrator | + osism apply known-hosts
2025-04-10 00:26:35.879123 | orchestrator | 2025-04-10 00:26:35 | INFO  | Task 32173b22-a57e-48ee-85e3-62b56ab7f5b9 (known-hosts) was prepared for execution.
2025-04-10 00:26:39.075279 | orchestrator | 2025-04-10 00:26:35 | INFO  | It takes a moment until task 32173b22-a57e-48ee-85e3-62b56ab7f5b9 (known-hosts) has been started and output is visible here.
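Editor's note: the `PLAY RECAP` host lines in this log (e.g. `testbed-manager : ok=4  changed=3 ...`) use a fixed `key=value` format, which is convenient when post-processing CI logs like this one. A minimal parsing sketch, assuming only that format (the function name is illustrative, not part of osism or Zuul):

```python
import re

def parse_recap(line: str) -> dict:
    """Parse an Ansible PLAY RECAP host line into a stats dict."""
    host, _, stats = line.partition(":")
    # Every counter appears as key=value; collect them all as integers.
    return {"host": host.strip(),
            **{k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", stats)}}

# Recap line of the sshconfig play above
stats = parse_recap(
    "testbed-manager : ok=4  changed=3  unreachable=0 failed=0 "
    "skipped=1  rescued=0 ignored=0"
)
```

With this, a log scraper can flag runs where `failed` or `unreachable` is non-zero.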
2025-04-10 00:26:39.075429 | orchestrator |
2025-04-10 00:26:39.077702 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-04-10 00:26:39.077827 | orchestrator |
2025-04-10 00:26:45.074278 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-04-10 00:26:45.074404 | orchestrator | Thursday 10 April 2025 00:26:39 +0000 (0:00:00.113) 0:00:00.113 ********
2025-04-10 00:26:45.074545 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-04-10 00:26:45.075532 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-04-10 00:26:45.075558 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-04-10 00:26:45.075578 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-04-10 00:26:45.076296 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-04-10 00:26:45.076900 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-04-10 00:26:45.077303 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-04-10 00:26:45.077323 | orchestrator |
2025-04-10 00:26:45.077819 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-04-10 00:26:45.078225 | orchestrator | Thursday 10 April 2025 00:26:45 +0000 (0:00:05.997) 0:00:06.110 ********
2025-04-10 00:26:45.230444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-04-10 00:26:45.230682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-04-10 00:26:45.230714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-04-10 00:26:45.231279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-04-10 00:26:45.232098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-04-10 00:26:45.232806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-04-10 00:26:45.233041 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-04-10 00:26:45.233903 | orchestrator |
2025-04-10 00:26:45.234584 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:26:45.235663 | orchestrator | Thursday 10 April 2025 00:26:45 +0000 (0:00:00.159) 0:00:06.270 ********
2025-04-10 00:26:46.491211 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClOTTpziblaW1hFRB17iKhUn1MJb7NsYcX/lTEYCKu9Snlx+FjzcbuXOnrRkv5xtKVLLGFHHyTt+U8L3477xx2RlJmCN5dcwontKL1PfieyeGO27i7XiVG0Tcdd8BHfpKOqhaCDYsmRNMYAd5b7hHTANc5JQrTUPUQQSiuom1yzRnaya8vjq5v06wOfygDYwXAoXiC8nlwB/ssf99WKTGhtSPyCtxDDrMbbCRByenxSyfCuE4qwteIXKwupawGQismZxUoixmu9aleQbu71aJbkonKTtIkiuRSEmokSXV9GdHBXKJDpwnV+6f+lo8v5y7YjuZOHrA+DIjc+wZ1NLaFf5ZS+eXKXaUzONpauWfQT29XGoMLt3gWRh0WwwVmdRIj0VjQOdNfMErA5+itgBaqZKISdLTMfH+CYCeKeDbbolBtp7c1pg4m6ZCKIUdGdP98YePoDFS0OE202gaB3rASeIcHcrEnJqX/WgaJQ5g6ZTCj59+sz7h828D3kBX6hkk=)
2025-04-10 00:26:46.491857 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBObZ16EHiOeJ0Sp3I7I45uTBoh4AdAaLggB4f6b1dhAJ0wDQo4d8cIF3n3/Kl64ZICM/F7yY2I0TGTkbicV3Hfk=)
2025-04-10 00:26:46.492639 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEQGkTrtJRAvIgb3qODsZaXp9DLMNey+XB+xlMZWR5XZ)
2025-04-10 00:26:46.492900 | orchestrator |
2025-04-10 00:26:46.493442 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:26:46.493942 | orchestrator | Thursday 10 April 2025 00:26:46 +0000 (0:00:01.258) 0:00:07.528 ********
2025-04-10 00:26:47.650275 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAff/OM6p7xtkRV61VO5rxS/L+pyiFz9wcJywNf0HLeS)
2025-04-10 00:26:47.650651 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9z1oXrq6NOIadIDzEmd0b6LS17b6X6knqSLnlnKJ4vBENSKh4PCsXQGS2sUlKOESxsgNa3dUVlT86i4Lauq1F5xyS5iAN7bErNWV4cB0bMwNMho9EJzrHN60jAHbVnX/RUUQumIRKBtZTKg1tp/D531DluEu1OXHfFpOkLwVZ2BzAg4PC62ZXwUz48aMs440a6EMbsceQDe3YAi6QGTYTF63mRzPeFp7u5tizLPimeZaF5kf7VJZabhmLp7kjDxb5VAwkZOwg/Fe2M+CyDEjhMgbYozPF9TeMd9ExInNyakN6jv/4q7MxOUnbmH8JEjeKe+lbAMd9yEfhYA2tSlXQLCKNtVrSOiKd9Aly/VwcVMnffHkTaEFb2HPNAXcv0+EsnncFFRIp/LYgQ/6oVaY4kpdEXIZERP2SU7+ikXhOX7q4d8m5Zdm1UEZHNhSfxgBZCFn40t2VyAAR5Qg7b/xzJeNJMAaLtSnn8YUiiMxo4NPenwqkE3wL005FevO3yDM=)
2025-04-10 00:26:47.650961 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCdkehmzywSYmUPn6sd4P458QJ82MpMMEpSV0y5pEvrgFbblY8Z1bpm/2GszWRRKI/wMsILOXw9VZu2urnuXjNI=)
2025-04-10 00:26:47.650997 | orchestrator |
2025-04-10 00:26:47.651592 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:26:47.651834 | orchestrator | Thursday 10 April 2025 00:26:47 +0000 (0:00:01.160) 0:00:08.688 ********
2025-04-10 00:26:48.773449 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrCF/UGkcte5htdRM4h07nbBbYRE5t9pvghsGPzwsafXtzsnYnXrtuw289lc4LcJ+w5I61ke4cSnoqxn/BDPHxO9CXBzBN8QdXFg4OablGwXAZfngWkgL/sviTjQHhy93Zqdvsdd0dz7rHr5/Q0Acv1/MUdINwsAlyAVVjLRR/Icy167VQUQkJaVqovixpYYJUcY5c9jllCgZNn+zbChX0iHcmz7YLOzuvOWaR+7GObFF3UwXQ/Il1kGwNqdEnvUgW2d1AJia+rGoadbOaIlvxAYf/QmQwPHhKoPTRWqKX8IeXMc+iDdVCI7cOLGBHWxKNnH3lcnA00mXfrYtJ1eurN8mrGRr5P+vbTvMSXnBbrbgRHoMYd0akbn1le2Ijme0G6BEOpyiwajy5973v88NA8nZnDDOuERmNne7NCxm1ucaVmw/DoObDesICxB4fveCGHG2JOAf8jntAGvnEtktpojWBxMsECWjAEC/hOOxRQxuTMwGCipqq/XD1oBYmcq8=)
2025-04-10 00:26:48.773679 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOoWh0j09i/Ob7QXxZnVuVOL7QAk4/Pv6MF3t3gwuOnaJGM7kU/x9o12QpOpfyJjNwIrWo0XWxQP/uLLrzMtlGc=)
2025-04-10 00:26:48.774424 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPITxiDNSBDUPzCvVnZ7hBC6loEEGubKrkQRUDDO5Qzv)
2025-04-10 00:26:48.774696 | orchestrator |
2025-04-10 00:26:48.774729 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:26:48.775870 | orchestrator | Thursday 10 April 2025 00:26:48 +0000 (0:00:01.123) 0:00:09.812 ********
2025-04-10 00:26:49.932474 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDry6KCdwUdUVpv+u+/y1IE9L6IJhsPM6kHmpWTCJ4w9HBevJd2/Vcxdypz5AG2Pk7F+lirSDm63afjwhQ6yqFC8+UxyUbSJhW/LgVc76xE42pGYMQWXxHPOKH9iGFWM22PypwLDco/XLXp7PbGcsLn5pVxFhvDN0dc9j7cA2KhET7OAjtjgPaeNGoWIFKuZ4pg5qHnNKOpygxnED7OxL2FRVw0RotT7V8/1+8afMFu9FSgkge0sCbNwlz2Gf+jJusZBAbNKmCrk4NQy4s7g8tEQlDNtU/JmQkRyUxD5sG6dn9zJMdNNfkwhSpkRWzQYSKmz6Qe+3o9dn98J8INmBiNfbya9impFDttgG8f1r6kGwH4AOx1KKm8gbsjV1y5ySMRsrhbvJUzPkquFqd8PJVW18vIga+v3nKNpfSdH5otCuvW7mYKAnMLhWhjd9QstXCgR0ZnMrhmDtaw9Bpq+KzmX0piyxoo1Jq6gr5fRipc9Nr0hOg0Pxc+IHT7BlvwcTk=)
2025-04-10 00:26:49.934903 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCG08F79iymqnCqbhOr5cqFZR76umUSkYDiQ6FCdSqKS4g/ptSR9nWEK7k7hRLhRrW/TVs2OBjtVsA2cDIPAygI=)
2025-04-10 00:26:49.935439 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG5IM87Dq0eQXsnAuTkFCZMLF7WgF9ZkXgEhF3cnoa/C)
2025-04-10 00:26:49.936215 | orchestrator |
2025-04-10 00:26:49.937075 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:26:49.937951 | orchestrator | Thursday 10 April 2025 00:26:49 +0000 (0:00:01.157) 0:00:10.969 ********
2025-04-10 00:26:51.034519 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRuaYlj2ZcgwpFtAxEnO4T2eeNiPFGYAgIIkVOL2Zo3XsGqZDo/3gq4AukxKRot9L31EwBn59uRknNB4B488RLa4E/2KXD4YTDTaxEsfkYr3nSdwCn3iSqUsUu09XgMN8MUoTvFZ/90C7yyEF2WS8u0prvjz+SAiR1jKy0IDitIotumRVaVISUhp4GqZ3tdeMeNkFsxQ9s+v7099IMl3b3y/UoieGeQvL8ADefn63EcuQOhFclM9sJZB5dt0SqkeDypCI+ovAdYbfuRH0FwiuD7YdxCHY+q5ohDvG1vm20ayFft2tK9/L8FzK7tpPZdfo6rUzf4cpYFAQvrbsBRgUplPO/p7A9Unxd4TEhmtTbZEFNfme83T5YG42NPne+Uq0VD2CC3xKombIs1nF8kvdkbbNUUC1rx/oMiEzu5XVG7WPTmrq5TnzJwf6/bLKppHir2UHmesLdr7MdGwoCikyB3tON8sZzJBd+uyx+nh+YoOS33MH7ofMFIIFG+3QTFoc=)
2025-04-10 00:26:51.035336 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCqJ2pI1SH2d5rovosKAftcxVJvymV/dRqr0pRmjz0DB2bHGZM13pgo8E+5Q866wvF68gwbOyDlwmIBPB89QMEo=)
2025-04-10 00:26:51.036900 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGL1VLHCdl8c/3uroHku38VSaoA+Hxy4VrKl5HE1g22+)
2025-04-10 00:26:51.038660 | orchestrator |
2025-04-10 00:26:51.039640 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:26:51.040135 | orchestrator | Thursday 10 April 2025 00:26:51 +0000 (0:00:01.103) 0:00:12.073 ********
2025-04-10 00:26:52.186659 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDB6iEalmWrOQCE4bW/93UlP3AuMJtj7Fcq5qbqFC1/cLJDvjY/07hZjV+UpD8iDZhvpWhGG/o/KAOukJbqiSrx2DD5edsdrKIJbLLClNqzN7FaezAwAEu7HGaXtogCSA7ALCpmUU+HHLBZ5YjdtXO+6fjbpg9ZKcocfDAKSwj3nWWb2QEoBlNEy1xEu6EOOL27rYtahAmL145v5d1IiXdDdA/tmG7azCk+bzdkLL3ydlwoqsAY0GqdiR/b2mh9kYJ/JTUb8osRYrPPHkwyNf16vsdMCEJCs8kaNDyv653wXrXOFxec3hiRz8FEC2gq/TqNK3SRuJM1J4XlDWbZEzX1S1jbMjQJW4kJONDkTgSnK2OwpP7ccJM4mJyDti3DP1dMytQRZ3bXkMsUHOUIHMMwMM5Kxx1Mgoq9yDtfM2doCcZpJKz5XlleDmBduN4Jufhi/BLoPZ1IjAb3uAaNITd3rU7/Hos3psj/FCwRvPDHgJlfDGilZNYf9+NvF359WUs=)
2025-04-10 00:26:52.187574 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBN333fqWC6TZRX2ECMF+Cf2j0x7fOhz7NtZIylXB37CJ2hP4hYylqXHNiQ3lCN/RqVBIKxgwbUaGFKYoTHq4+A=)
2025-04-10 00:26:52.187620 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAPO0p3Zm2Rm7JBx3K5KIIYgmaLqMo/2BzcJK5Yu2Tp+)
2025-04-10 00:26:52.188518 | orchestrator |
2025-04-10 00:26:52.189188 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:26:52.191220 | orchestrator | Thursday 10 April 2025 00:26:52 +0000 (0:00:01.150) 0:00:13.224 ********
2025-04-10 00:26:53.322333 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5Q6fMijHMPPlZTGrB0SToLVvmhzc7CSCGuckEOai3Ox//lEtJ+nyRhuoXgLLbCW1sXuo13EQBTeH/wAZhNiJGM5fx8J6hFVj+Gwwfg7+FjdeCUNsTENTVDoS1XrqgcI76UmUhOauo/aE8+TmxisjgmUebJBPLE7PLcb1NqhrckLBmABhU12ZEJ4wlS7d1wIZMCLb7IGLv7LGwqdhwz8r6wxlSOTDYsvElBx8bFe86GKSVjLkJC90ifK5pxmFE7c8FsjTa2oMjrhHRSxN2g8JlXBRyxoP/AOhyAkGLXn6NB2r3pAPHWe6MzB0fN6nOX/ZhEhqDKXxuPzfO6eDiSaAWVVfyQQDxTP5rDanrUGQbXaQaXgDwhsv3mR0i9OtWFCX5R5OkVk1PLHL/wkCZKrFUKNB7WbBEmnRMIkzunKHv+a1qKJ6Kwpe92+VjL2qgXntZMcEwwjVnytyLRfdXst7hkS38kGfIzQBSjX3u+QSpoKtt+fnlywqNMulfflNjPEs=)
2025-04-10 00:26:53.323320 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCp7ui2NkziVzxkI6LHyZiAhrfV/JrpjVnI9hUzCF5XLE3zLV48TDidY5vdVoDqxQ+qXpBUliGc5E5Zv6CVnUXg=)
2025-04-10 00:26:53.323904 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIYY7Gvm3o3YPsSGNOrRBHOqK9TrKyLLBmoEm3eDYjMv)
2025-04-10 00:26:53.325000 | orchestrator |
2025-04-10 00:26:53.327098 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-04-10 00:26:53.327868 | orchestrator | Thursday 10 April 2025 00:26:53 +0000 (0:00:01.136) 0:00:14.360 ********
2025-04-10 00:26:58.782369 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-04-10 00:26:58.783496 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-04-10 00:26:58.783697 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-04-10 00:26:58.784669 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-04-10 00:26:58.784703 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-04-10 00:26:58.786121 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-04-10 00:26:58.786786 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-04-10 00:26:58.788049 | orchestrator |
2025-04-10 00:26:58.788498 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-04-10 00:26:58.789340 | orchestrator | Thursday 10 April 2025 00:26:58 +0000 (0:00:05.459) 0:00:19.820 ********
2025-04-10 00:26:58.953999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-04-10 00:26:58.954638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-04-10 00:26:58.956015 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-04-10 00:26:58.956932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-04-10 00:26:58.958209 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-04-10 00:26:58.959590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-04-10 00:26:58.959687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-04-10 00:26:58.960862 | orchestrator |
2025-04-10 00:26:58.961869 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:26:58.962173 | orchestrator | Thursday 10 April 2025 00:26:58 +0000 (0:00:00.172) 0:00:19.992 ********
2025-04-10 00:27:00.164189 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClOTTpziblaW1hFRB17iKhUn1MJb7NsYcX/lTEYCKu9Snlx+FjzcbuXOnrRkv5xtKVLLGFHHyTt+U8L3477xx2RlJmCN5dcwontKL1PfieyeGO27i7XiVG0Tcdd8BHfpKOqhaCDYsmRNMYAd5b7hHTANc5JQrTUPUQQSiuom1yzRnaya8vjq5v06wOfygDYwXAoXiC8nlwB/ssf99WKTGhtSPyCtxDDrMbbCRByenxSyfCuE4qwteIXKwupawGQismZxUoixmu9aleQbu71aJbkonKTtIkiuRSEmokSXV9GdHBXKJDpwnV+6f+lo8v5y7YjuZOHrA+DIjc+wZ1NLaFf5ZS+eXKXaUzONpauWfQT29XGoMLt3gWRh0WwwVmdRIj0VjQOdNfMErA5+itgBaqZKISdLTMfH+CYCeKeDbbolBtp7c1pg4m6ZCKIUdGdP98YePoDFS0OE202gaB3rASeIcHcrEnJqX/WgaJQ5g6ZTCj59+sz7h828D3kBX6hkk=)
2025-04-10 00:27:00.164833 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBObZ16EHiOeJ0Sp3I7I45uTBoh4AdAaLggB4f6b1dhAJ0wDQo4d8cIF3n3/Kl64ZICM/F7yY2I0TGTkbicV3Hfk=)
2025-04-10 00:27:00.165710 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEQGkTrtJRAvIgb3qODsZaXp9DLMNey+XB+xlMZWR5XZ)
2025-04-10 00:27:00.166344 | orchestrator |
2025-04-10 00:27:00.166632 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:27:00.167059 | orchestrator | Thursday 10 April 2025 00:27:00 +0000 (0:00:01.208) 0:00:21.201 ********
2025-04-10 00:27:01.301620 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAff/OM6p7xtkRV61VO5rxS/L+pyiFz9wcJywNf0HLeS)
2025-04-10 00:27:01.302645 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9z1oXrq6NOIadIDzEmd0b6LS17b6X6knqSLnlnKJ4vBENSKh4PCsXQGS2sUlKOESxsgNa3dUVlT86i4Lauq1F5xyS5iAN7bErNWV4cB0bMwNMho9EJzrHN60jAHbVnX/RUUQumIRKBtZTKg1tp/D531DluEu1OXHfFpOkLwVZ2BzAg4PC62ZXwUz48aMs440a6EMbsceQDe3YAi6QGTYTF63mRzPeFp7u5tizLPimeZaF5kf7VJZabhmLp7kjDxb5VAwkZOwg/Fe2M+CyDEjhMgbYozPF9TeMd9ExInNyakN6jv/4q7MxOUnbmH8JEjeKe+lbAMd9yEfhYA2tSlXQLCKNtVrSOiKd9Aly/VwcVMnffHkTaEFb2HPNAXcv0+EsnncFFRIp/LYgQ/6oVaY4kpdEXIZERP2SU7+ikXhOX7q4d8m5Zdm1UEZHNhSfxgBZCFn40t2VyAAR5Qg7b/xzJeNJMAaLtSnn8YUiiMxo4NPenwqkE3wL005FevO3yDM=)
2025-04-10 00:27:01.303493 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCdkehmzywSYmUPn6sd4P458QJ82MpMMEpSV0y5pEvrgFbblY8Z1bpm/2GszWRRKI/wMsILOXw9VZu2urnuXjNI=)
2025-04-10 00:27:01.304232 | orchestrator |
2025-04-10 00:27:01.305277 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:27:01.305811 | orchestrator | Thursday 10 April 2025 00:27:01 +0000 (0:00:01.138) 0:00:22.340 ********
2025-04-10 00:27:02.442411 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrCF/UGkcte5htdRM4h07nbBbYRE5t9pvghsGPzwsafXtzsnYnXrtuw289lc4LcJ+w5I61ke4cSnoqxn/BDPHxO9CXBzBN8QdXFg4OablGwXAZfngWkgL/sviTjQHhy93Zqdvsdd0dz7rHr5/Q0Acv1/MUdINwsAlyAVVjLRR/Icy167VQUQkJaVqovixpYYJUcY5c9jllCgZNn+zbChX0iHcmz7YLOzuvOWaR+7GObFF3UwXQ/Il1kGwNqdEnvUgW2d1AJia+rGoadbOaIlvxAYf/QmQwPHhKoPTRWqKX8IeXMc+iDdVCI7cOLGBHWxKNnH3lcnA00mXfrYtJ1eurN8mrGRr5P+vbTvMSXnBbrbgRHoMYd0akbn1le2Ijme0G6BEOpyiwajy5973v88NA8nZnDDOuERmNne7NCxm1ucaVmw/DoObDesICxB4fveCGHG2JOAf8jntAGvnEtktpojWBxMsECWjAEC/hOOxRQxuTMwGCipqq/XD1oBYmcq8=)
2025-04-10 00:27:02.443399 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOoWh0j09i/Ob7QXxZnVuVOL7QAk4/Pv6MF3t3gwuOnaJGM7kU/x9o12QpOpfyJjNwIrWo0XWxQP/uLLrzMtlGc=)
2025-04-10 00:27:02.443605 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPITxiDNSBDUPzCvVnZ7hBC6loEEGubKrkQRUDDO5Qzv)
2025-04-10 00:27:02.445377 | orchestrator |
2025-04-10 00:27:02.446092 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:27:02.446969 | orchestrator | Thursday 10 April 2025 00:27:02 +0000 (0:00:01.139) 0:00:23.480 ********
2025-04-10 00:27:03.584966 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG5IM87Dq0eQXsnAuTkFCZMLF7WgF9ZkXgEhF3cnoa/C)
2025-04-10 00:27:03.586251 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDry6KCdwUdUVpv+u+/y1IE9L6IJhsPM6kHmpWTCJ4w9HBevJd2/Vcxdypz5AG2Pk7F+lirSDm63afjwhQ6yqFC8+UxyUbSJhW/LgVc76xE42pGYMQWXxHPOKH9iGFWM22PypwLDco/XLXp7PbGcsLn5pVxFhvDN0dc9j7cA2KhET7OAjtjgPaeNGoWIFKuZ4pg5qHnNKOpygxnED7OxL2FRVw0RotT7V8/1+8afMFu9FSgkge0sCbNwlz2Gf+jJusZBAbNKmCrk4NQy4s7g8tEQlDNtU/JmQkRyUxD5sG6dn9zJMdNNfkwhSpkRWzQYSKmz6Qe+3o9dn98J8INmBiNfbya9impFDttgG8f1r6kGwH4AOx1KKm8gbsjV1y5ySMRsrhbvJUzPkquFqd8PJVW18vIga+v3nKNpfSdH5otCuvW7mYKAnMLhWhjd9QstXCgR0ZnMrhmDtaw9Bpq+KzmX0piyxoo1Jq6gr5fRipc9Nr0hOg0Pxc+IHT7BlvwcTk=)
2025-04-10 00:27:03.586979 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCG08F79iymqnCqbhOr5cqFZR76umUSkYDiQ6FCdSqKS4g/ptSR9nWEK7k7hRLhRrW/TVs2OBjtVsA2cDIPAygI=)
2025-04-10 00:27:03.588391 | orchestrator |
2025-04-10 00:27:03.589705 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:27:04.722008 | orchestrator | Thursday 10 April 2025 00:27:03 +0000 (0:00:01.143) 0:00:24.623 ********
2025-04-10 00:27:04.722293 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCqJ2pI1SH2d5rovosKAftcxVJvymV/dRqr0pRmjz0DB2bHGZM13pgo8E+5Q866wvF68gwbOyDlwmIBPB89QMEo=)
2025-04-10 00:27:04.722588 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRuaYlj2ZcgwpFtAxEnO4T2eeNiPFGYAgIIkVOL2Zo3XsGqZDo/3gq4AukxKRot9L31EwBn59uRknNB4B488RLa4E/2KXD4YTDTaxEsfkYr3nSdwCn3iSqUsUu09XgMN8MUoTvFZ/90C7yyEF2WS8u0prvjz+SAiR1jKy0IDitIotumRVaVISUhp4GqZ3tdeMeNkFsxQ9s+v7099IMl3b3y/UoieGeQvL8ADefn63EcuQOhFclM9sJZB5dt0SqkeDypCI+ovAdYbfuRH0FwiuD7YdxCHY+q5ohDvG1vm20ayFft2tK9/L8FzK7tpPZdfo6rUzf4cpYFAQvrbsBRgUplPO/p7A9Unxd4TEhmtTbZEFNfme83T5YG42NPne+Uq0VD2CC3xKombIs1nF8kvdkbbNUUC1rx/oMiEzu5XVG7WPTmrq5TnzJwf6/bLKppHir2UHmesLdr7MdGwoCikyB3tON8sZzJBd+uyx+nh+YoOS33MH7ofMFIIFG+3QTFoc=)
2025-04-10 00:27:04.722919 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGL1VLHCdl8c/3uroHku38VSaoA+Hxy4VrKl5HE1g22+)
2025-04-10 00:27:04.723954 | orchestrator |
2025-04-10 00:27:04.725100 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:27:04.726207 | orchestrator | Thursday 10 April 2025 00:27:04 +0000 (0:00:01.137) 0:00:25.760 ********
2025-04-10 00:27:05.823485 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBN333fqWC6TZRX2ECMF+Cf2j0x7fOhz7NtZIylXB37CJ2hP4hYylqXHNiQ3lCN/RqVBIKxgwbUaGFKYoTHq4+A=)
2025-04-10 00:27:05.824469 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDB6iEalmWrOQCE4bW/93UlP3AuMJtj7Fcq5qbqFC1/cLJDvjY/07hZjV+UpD8iDZhvpWhGG/o/KAOukJbqiSrx2DD5edsdrKIJbLLClNqzN7FaezAwAEu7HGaXtogCSA7ALCpmUU+HHLBZ5YjdtXO+6fjbpg9ZKcocfDAKSwj3nWWb2QEoBlNEy1xEu6EOOL27rYtahAmL145v5d1IiXdDdA/tmG7azCk+bzdkLL3ydlwoqsAY0GqdiR/b2mh9kYJ/JTUb8osRYrPPHkwyNf16vsdMCEJCs8kaNDyv653wXrXOFxec3hiRz8FEC2gq/TqNK3SRuJM1J4XlDWbZEzX1S1jbMjQJW4kJONDkTgSnK2OwpP7ccJM4mJyDti3DP1dMytQRZ3bXkMsUHOUIHMMwMM5Kxx1Mgoq9yDtfM2doCcZpJKz5XlleDmBduN4Jufhi/BLoPZ1IjAb3uAaNITd3rU7/Hos3psj/FCwRvPDHgJlfDGilZNYf9+NvF359WUs=)
2025-04-10 00:27:05.824688 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAPO0p3Zm2Rm7JBx3K5KIIYgmaLqMo/2BzcJK5Yu2Tp+)
2025-04-10 00:27:05.825642 | orchestrator |
2025-04-10 00:27:05.826805 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-10 00:27:05.827225 | orchestrator | Thursday 10 April 2025 00:27:05 +0000 (0:00:01.101) 0:00:26.861 ********
2025-04-10 00:27:06.997594 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIYY7Gvm3o3YPsSGNOrRBHOqK9TrKyLLBmoEm3eDYjMv)
2025-04-10 00:27:06.998453 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5Q6fMijHMPPlZTGrB0SToLVvmhzc7CSCGuckEOai3Ox//lEtJ+nyRhuoXgLLbCW1sXuo13EQBTeH/wAZhNiJGM5fx8J6hFVj+Gwwfg7+FjdeCUNsTENTVDoS1XrqgcI76UmUhOauo/aE8+TmxisjgmUebJBPLE7PLcb1NqhrckLBmABhU12ZEJ4wlS7d1wIZMCLb7IGLv7LGwqdhwz8r6wxlSOTDYsvElBx8bFe86GKSVjLkJC90ifK5pxmFE7c8FsjTa2oMjrhHRSxN2g8JlXBRyxoP/AOhyAkGLXn6NB2r3pAPHWe6MzB0fN6nOX/ZhEhqDKXxuPzfO6eDiSaAWVVfyQQDxTP5rDanrUGQbXaQaXgDwhsv3mR0i9OtWFCX5R5OkVk1PLHL/wkCZKrFUKNB7WbBEmnRMIkzunKHv+a1qKJ6Kwpe92+VjL2qgXntZMcEwwjVnytyLRfdXst7hkS38kGfIzQBSjX3u+QSpoKtt+fnlywqNMulfflNjPEs=)
2025-04-10 00:27:06.998527 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCp7ui2NkziVzxkI6LHyZiAhrfV/JrpjVnI9hUzCF5XLE3zLV48TDidY5vdVoDqxQ+qXpBUliGc5E5Zv6CVnUXg=)
2025-04-10 00:27:06.998542 | orchestrator |
2025-04-10 00:27:06.998990 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-04-10 00:27:06.999004 | orchestrator | Thursday 10 April 2025 00:27:06 +0000 (0:00:01.172) 0:00:28.033 ********
2025-04-10 00:27:07.189788 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-04-10 00:27:07.190455 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-04-10 00:27:07.190505 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-04-10 00:27:07.191092 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-04-10 00:27:07.191843 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-04-10 00:27:07.192305 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-04-10 00:27:07.192766 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-04-10 00:27:07.193212 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:27:07.193853 | orchestrator |
2025-04-10 00:27:07.194120 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-04-10 00:27:07.194447 | orchestrator | Thursday 10 April 2025 00:27:07 +0000 (0:00:00.196) 0:00:28.230 ********
2025-04-10 00:27:07.252539 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:27:07.252997 | orchestrator |
2025-04-10 00:27:07.253037 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-04-10 00:27:07.253340 | orchestrator | Thursday 10 April 2025 00:27:07 +0000 (0:00:00.063) 0:00:28.293 ********
2025-04-10 00:27:07.309523 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:27:07.309879 | orchestrator |
2025-04-10 00:27:07.310106 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-04-10 00:27:07.310701 | orchestrator | Thursday 10 April 2025 00:27:07 +0000 (0:00:00.055) 0:00:28.349 ********
2025-04-10 00:27:08.092389 | orchestrator | changed: [testbed-manager]
2025-04-10 00:27:08.093166 | orchestrator |
2025-04-10 00:27:08.093211 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:27:08.094208 | orchestrator | 2025-04-10 00:27:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-10 00:27:08.094631 | orchestrator | 2025-04-10 00:27:08 | INFO  | Please wait and do not abort execution.
2025-04-10 00:27:08.094663 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-10 00:27:08.095463 | orchestrator |
2025-04-10 00:27:08.096199 | orchestrator | Thursday 10 April 2025 00:27:08 +0000 (0:00:00.782) 0:00:29.131 ********
2025-04-10 00:27:08.097450 | orchestrator | ===============================================================================
2025-04-10 00:27:08.097654 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.00s
2025-04-10 00:27:08.098432 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.46s
2025-04-10 00:27:08.098849 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.26s
2025-04-10 00:27:08.099625 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s
2025-04-10 00:27:08.100223 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s
2025-04-10 00:27:08.100864 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s
2025-04-10 00:27:08.101313 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s
2025-04-10 00:27:08.101639 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2025-04-10 00:27:08.102383 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2025-04-10 00:27:08.102606 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2025-04-10 00:27:08.102848 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2025-04-10 00:27:08.103636 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2025-04-10 00:27:08.104092 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2025-04-10 00:27:08.104302 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2025-04-10 00:27:08.104580 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-04-10 00:27:08.105323 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-04-10 00:27:08.105865 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.78s
2025-04-10 00:27:08.105899 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.20s
2025-04-10 00:27:08.106252 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s
2025-04-10 00:27:08.106866 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s
2025-04-10 00:27:08.508922 | orchestrator | + osism apply squid
2025-04-10 00:27:10.048761 | orchestrator | 2025-04-10 00:27:10 | INFO  | Task 76f23599-71bf-4752-b003-c6c14b74fb10 (squid) was prepared for execution.
2025-04-10 00:27:13.419860 | orchestrator | 2025-04-10 00:27:10 | INFO  | It takes a moment until task 76f23599-71bf-4752-b003-c6c14b74fb10 (squid) has been started and output is visible here.
2025-04-10 00:27:13.420021 | orchestrator |
2025-04-10 00:27:13.420289 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-04-10 00:27:13.420959 | orchestrator |
2025-04-10 00:27:13.421177 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-04-10 00:27:13.421209 | orchestrator | Thursday 10 April 2025 00:27:13 +0000 (0:00:00.119) 0:00:00.119 ********
2025-04-10 00:27:13.529217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-04-10 00:27:13.531205 | orchestrator |
2025-04-10 00:27:13.531342 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-04-10 00:27:15.040432 | orchestrator | Thursday 10 April 2025 00:27:13 +0000 (0:00:00.111) 0:00:00.230 ********
2025-04-10 00:27:15.040581 | orchestrator | ok: [testbed-manager]
2025-04-10 00:27:15.040853 | orchestrator |
2025-04-10 00:27:15.041487 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-04-10 00:27:15.041999 | orchestrator | Thursday 10 April 2025 00:27:15 +0000 (0:00:01.508) 0:00:01.739 ********
2025-04-10 00:27:16.259547 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-04-10 00:27:16.260775 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-04-10 00:27:16.261099 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-04-10 00:27:16.261132 | orchestrator |
2025-04-10 00:27:16.262200 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-04-10 00:27:16.262503 | orchestrator | Thursday 10 April 2025 00:27:16 +0000 (0:00:01.219) 0:00:02.959 ********
2025-04-10 00:27:17.413844 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-04-10 00:27:17.416347 | orchestrator |
2025-04-10 00:27:17.417487 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-04-10 00:27:17.418615 | orchestrator | Thursday 10 April 2025 00:27:17 +0000 (0:00:01.155) 0:00:04.114 ********
2025-04-10 00:27:17.793654 | orchestrator | ok: [testbed-manager]
2025-04-10 00:27:17.795390 | orchestrator |
2025-04-10 00:27:17.795631 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-04-10 00:27:17.796564 | orchestrator | Thursday 10 April 2025 00:27:17 +0000 (0:00:00.379) 0:00:04.493 ********
2025-04-10 00:27:18.788314 | orchestrator | changed: [testbed-manager]
2025-04-10 00:27:18.788940 | orchestrator |
2025-04-10 00:27:18.788995 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-04-10 00:27:18.789528 | orchestrator | Thursday 10 April 2025 00:27:18 +0000 (0:00:00.994) 0:00:05.488 ********
2025-04-10 00:27:50.708079 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-04-10 00:28:03.239084 | orchestrator | ok: [testbed-manager]
2025-04-10 00:28:03.239285 | orchestrator |
2025-04-10 00:28:03.239308 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-04-10 00:28:03.239325 | orchestrator | Thursday 10 April 2025 00:27:50 +0000 (0:00:31.915) 0:00:37.403 ********
2025-04-10 00:28:03.239355 | orchestrator | changed: [testbed-manager]
2025-04-10 00:28:03.244026 | orchestrator |
2025-04-10 00:28:03.244060 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-04-10 00:28:03.244082 | orchestrator | Thursday 10 April 2025 00:28:03 +0000 (0:00:12.534) 0:00:49.937 ********
2025-04-10 00:29:03.324545 | orchestrator | Pausing for 60 seconds
2025-04-10 00:29:03.383590 | orchestrator | changed: [testbed-manager]
2025-04-10 00:29:03.383698 | orchestrator |
2025-04-10 00:29:03.383718 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-04-10 00:29:03.383734 | orchestrator | Thursday 10 April 2025 00:29:03 +0000 (0:01:00.079) 0:01:50.017 ********
2025-04-10 00:29:03.383764 | orchestrator | ok: [testbed-manager]
2025-04-10 00:29:03.384458 | orchestrator |
2025-04-10 00:29:03.385696 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-04-10 00:29:03.387296 | orchestrator | Thursday 10 April 2025 00:29:03 +0000 (0:00:00.066) 0:01:50.083 ********
2025-04-10 00:29:04.013690 | orchestrator | changed: [testbed-manager]
2025-04-10 00:29:04.014885 | orchestrator |
2025-04-10 00:29:04.016979 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:29:04.017014 | orchestrator | 2025-04-10 00:29:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-10 00:29:04.017030 | orchestrator | 2025-04-10 00:29:04 | INFO  | Please wait and do not abort execution.
2025-04-10 00:29:04.017051 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:29:04.018124 | orchestrator |
2025-04-10 00:29:04.019007 | orchestrator | Thursday 10 April 2025 00:29:04 +0000 (0:00:00.629) 0:01:50.713 ********
2025-04-10 00:29:04.019739 | orchestrator | ===============================================================================
2025-04-10 00:29:04.020448 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-04-10 00:29:04.021300 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.92s
2025-04-10 00:29:04.022099 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.53s
2025-04-10 00:29:04.022380 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.51s
2025-04-10 00:29:04.023000 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s
2025-04-10 00:29:04.023502 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.16s
2025-04-10 00:29:04.023937 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.99s
2025-04-10 00:29:04.024884 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s
2025-04-10 00:29:04.025566 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s
2025-04-10 00:29:04.026334 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.11s
2025-04-10 00:29:04.026927 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2025-04-10 00:29:04.514339 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-04-10 00:29:04.521265 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-04-10 00:29:04.521393 | orchestrator | ++ semver 8.1.0 9.0.0
2025-04-10 00:29:04.585057 | orchestrator | + [[ -1 -lt 0 ]]
2025-04-10 00:29:04.590847 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-04-10 00:29:04.590885 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml
2025-04-10 00:29:04.590911 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-04-10 00:29:04.597621 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-04-10 00:29:04.603303 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-04-10 00:29:06.077820 | orchestrator | 2025-04-10 00:29:06 | INFO  | Task 09758f5f-9e0c-4f9a-8e38-9422505ce5ad (operator) was prepared for execution.
2025-04-10 00:29:09.192294 | orchestrator | 2025-04-10 00:29:06 | INFO  | It takes a moment until task 09758f5f-9e0c-4f9a-8e38-9422505ce5ad (operator) has been started and output is visible here.
2025-04-10 00:29:09.192415 | orchestrator |
2025-04-10 00:29:09.193396 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-04-10 00:29:09.197186 | orchestrator |
2025-04-10 00:29:09.197468 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-10 00:29:09.199414 | orchestrator | Thursday 10 April 2025 00:29:09 +0000 (0:00:00.099) 0:00:00.099 ********
2025-04-10 00:29:13.636224 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:29:13.636390 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:29:13.636615 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:29:13.637349 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:29:13.637866 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:29:13.637969 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:29:13.637999 | orchestrator |
2025-04-10 00:29:13.642569 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-04-10 00:29:14.464470 | orchestrator | Thursday 10 April 2025 00:29:13 +0000 (0:00:04.443) 0:00:04.543 ********
2025-04-10 00:29:14.464639 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:29:14.464725 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:29:14.465088 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:29:14.469257 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:29:14.469369 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:29:14.469410 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:29:14.469422 | orchestrator |
2025-04-10 00:29:14.469435 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-04-10 00:29:14.469450 | orchestrator |
2025-04-10 00:29:14.469686 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-04-10 00:29:14.469712 | orchestrator | Thursday 10 April 2025 00:29:14 +0000 (0:00:00.830) 0:00:05.373 ********
2025-04-10 00:29:14.531697 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:29:14.558190 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:29:14.574762 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:29:14.622754 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:29:14.623235 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:29:14.623948 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:29:14.624239 | orchestrator |
2025-04-10 00:29:14.624930 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-04-10 00:29:14.626355 | orchestrator | Thursday 10 April 2025 00:29:14 +0000 (0:00:00.159) 0:00:05.533 ********
2025-04-10 00:29:14.695773 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:29:14.721510 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:29:14.747169 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:29:14.815217 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:29:14.816263 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:29:14.818263 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:29:14.820401 | orchestrator |
2025-04-10 00:29:14.822097 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-04-10 00:29:14.822834 | orchestrator | Thursday 10 April 2025 00:29:14 +0000 (0:00:00.190) 0:00:05.723 ********
2025-04-10 00:29:15.538005 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:29:15.538363 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:29:15.538395 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:29:15.538461 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:29:15.539002 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:29:15.539374 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:29:15.539724 | orchestrator |
2025-04-10 00:29:15.540249 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-04-10 00:29:15.540530 | orchestrator | Thursday 10 April 2025 00:29:15 +0000 (0:00:00.720) 0:00:06.444 ********
2025-04-10 00:29:16.320773 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:29:16.323994 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:29:16.324431 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:29:16.324461 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:29:16.324499 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:29:16.324515 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:29:16.324535 | orchestrator |
2025-04-10 00:29:16.325001 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-04-10 00:29:16.325495 | orchestrator | Thursday 10 April 2025 00:29:16 +0000 (0:00:00.783) 0:00:07.228 ********
2025-04-10 00:29:17.509694 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-04-10 00:29:17.510215 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-04-10 00:29:17.515034 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-04-10 00:29:17.516036 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-04-10 00:29:17.517987 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-04-10 00:29:17.520808 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-04-10 00:29:17.520893 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-04-10 00:29:17.522605 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-04-10 00:29:17.522868 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-04-10 00:29:17.523589 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-04-10 00:29:17.525424 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-04-10 00:29:17.527047 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-04-10 00:29:17.530116 | orchestrator |
2025-04-10 00:29:17.531246 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-04-10 00:29:17.532169 | orchestrator | Thursday 10 April 2025 00:29:17 +0000 (0:00:01.187) 0:00:08.415 ********
2025-04-10 00:29:18.916618 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:29:18.917085 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:29:18.917455 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:29:18.920469 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:29:18.923254 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:29:20.124379 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:29:20.124511 | orchestrator |
2025-04-10 00:29:20.124535 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-04-10 00:29:20.124551 | orchestrator | Thursday 10 April 2025 00:29:18 +0000 (0:00:01.407) 0:00:09.823 ********
2025-04-10 00:29:20.124583 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-04-10 00:29:20.124950 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-04-10 00:29:20.125481 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-04-10 00:29:20.229837 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-04-10 00:29:20.230284 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-04-10 00:29:20.230325 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-04-10 00:29:20.232320 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-04-10 00:29:20.233345 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-04-10 00:29:20.233375 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-04-10 00:29:20.234618 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-04-10 00:29:20.236066 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-04-10 00:29:20.237183 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-04-10 00:29:20.238454 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-04-10 00:29:20.238778 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-04-10 00:29:20.240710 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-04-10 00:29:20.241127 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-04-10 00:29:20.241195 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-04-10 00:29:20.241218 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-04-10 00:29:20.241759 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-04-10 00:29:20.243077 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-04-10 00:29:20.243418 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-04-10 00:29:20.244253 | orchestrator |
2025-04-10 00:29:20.244731 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-04-10 00:29:20.245392 | orchestrator | Thursday 10 April 2025 00:29:20 +0000 (0:00:01.314) 0:00:11.137 ********
2025-04-10 00:29:20.845110 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:29:20.846172 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:29:20.848645 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:29:20.850454 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:29:20.852387 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:29:20.853677 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:29:20.854562 | orchestrator |
2025-04-10 00:29:20.855639 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-04-10 00:29:20.856767 | orchestrator | Thursday 10 April 2025 00:29:20 +0000 (0:00:00.612) 0:00:11.749 ********
2025-04-10 00:29:20.911901 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:29:20.939518 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:29:20.958465 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:29:21.019338 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:29:21.019737 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:29:21.020669 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:29:21.021100 | orchestrator |
2025-04-10 00:29:21.021400 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-04-10 00:29:21.021930 | orchestrator | Thursday 10 April 2025 00:29:21 +0000 (0:00:00.178) 0:00:11.928 ********
2025-04-10 00:29:21.736245 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-04-10 00:29:21.736716 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:29:21.736758 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-04-10 00:29:21.737317 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-04-10 00:29:21.737846 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-04-10 00:29:21.738316 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:29:21.738679 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:29:21.739102 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:29:21.739522 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-04-10 00:29:21.739841 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:29:21.740226 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-04-10 00:29:21.740549 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:29:21.740969 | orchestrator |
2025-04-10 00:29:21.741342 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-04-10 00:29:21.741803 | orchestrator | Thursday 10 April 2025 00:29:21 +0000 (0:00:00.716) 0:00:12.645 ********
2025-04-10 00:29:21.784215 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:29:21.805565 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:29:21.831024 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:29:21.853875 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:29:21.900365 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:29:21.904085 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:29:21.951823 | orchestrator |
2025-04-10 00:29:21.951879 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-04-10 00:29:21.951897 | orchestrator | Thursday 10 April 2025 00:29:21 +0000 (0:00:00.164) 0:00:12.809 ********
2025-04-10 00:29:21.951922 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:29:21.986732 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:29:22.015986 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:29:22.038872 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:29:22.079433 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:29:22.079699 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:29:22.080742 | orchestrator |
2025-04-10 00:29:22.081025 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-04-10 00:29:22.082760 | orchestrator | Thursday 10 April 2025 00:29:22 +0000 (0:00:00.156) 0:00:12.989 ********
2025-04-10 00:29:22.146900 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:29:22.170119 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:29:22.193291 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:29:22.235182 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:29:22.236171 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:29:22.237436 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:29:22.237644 | orchestrator |
2025-04-10 00:29:22.237781 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-04-10 00:29:22.238441 | orchestrator | Thursday 10 April 2025 00:29:22 +0000 (0:00:00.156) 0:00:13.145 ********
2025-04-10 00:29:22.908699 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:29:22.909522 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:29:22.910420 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:29:22.911083 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:29:22.914527 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:29:22.986470 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:29:22.986608 | orchestrator |
2025-04-10 00:29:22.986628 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-04-10 00:29:22.986645 | orchestrator | Thursday 10 April 2025 00:29:22 +0000 (0:00:00.672) 0:00:13.817 ********
2025-04-10 00:29:22.986675 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:29:23.032502 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:29:23.139075 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:29:23.139862 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:29:23.140566 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:29:23.141995 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:29:23.142547 | orchestrator |
2025-04-10 00:29:23.144183 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:29:23.145542 | orchestrator | 2025-04-10 00:29:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-10 00:29:23.145990 | orchestrator | 2025-04-10 00:29:23 | INFO  | Please wait and do not abort execution.
2025-04-10 00:29:23.146005 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-10 00:29:23.147538 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-10 00:29:23.148371 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-10 00:29:23.148871 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-10 00:29:23.149335 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-10 00:29:23.150169 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-10 00:29:23.151256 | orchestrator |
2025-04-10 00:29:23.151866 | orchestrator | Thursday 10 April 2025 00:29:23 +0000 (0:00:00.231) 0:00:14.049 ********
2025-04-10 00:29:23.152282 | orchestrator | ===============================================================================
2025-04-10 00:29:23.153127 | orchestrator | Gathering Facts --------------------------------------------------------- 4.44s
2025-04-10 00:29:23.154432 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.41s
2025-04-10 00:29:23.154680 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.31s
2025-04-10 00:29:23.155965 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s
2025-04-10 00:29:23.156915 | orchestrator | Do not require tty for all users ---------------------------------------- 0.83s
2025-04-10 00:29:23.157654 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.78s
2025-04-10 00:29:23.158523 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.72s
2025-04-10 00:29:23.158996 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s
2025-04-10 00:29:23.160328 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s
2025-04-10 00:29:23.160575 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s
2025-04-10 00:29:23.160599 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2025-04-10 00:29:23.160618 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s
2025-04-10 00:29:23.161360 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2025-04-10 00:29:23.161975 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2025-04-10 00:29:23.162548 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2025-04-10 00:29:23.162850 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2025-04-10 00:29:23.163350 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-04-10 00:29:23.611826 | orchestrator | + osism apply --environment custom facts
2025-04-10 00:29:25.003622 | orchestrator | 2025-04-10 00:29:25 | INFO  | Trying to run play facts in environment custom
2025-04-10 00:29:25.058523 | orchestrator | 2025-04-10 00:29:25 | INFO  | Task 8c27bbcd-5535-42e2-bcf8-5576cb5f15d3 (facts) was prepared for execution.
2025-04-10 00:29:28.294117 | orchestrator | 2025-04-10 00:29:25 | INFO  | It takes a moment until task 8c27bbcd-5535-42e2-bcf8-5576cb5f15d3 (facts) has been started and output is visible here.
2025-04-10 00:29:28.294302 | orchestrator |
2025-04-10 00:29:28.294625 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-04-10 00:29:28.295829 | orchestrator |
2025-04-10 00:29:28.296421 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-04-10 00:29:28.296632 | orchestrator | Thursday 10 April 2025 00:29:28 +0000 (0:00:00.084) 0:00:00.084 ********
2025-04-10 00:29:29.553564 | orchestrator | ok: [testbed-manager]
2025-04-10 00:29:30.598639 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:29:30.598814 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:29:30.599590 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:29:30.600552 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:29:30.601161 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:29:30.604094 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:29:30.604122 | orchestrator |
2025-04-10 00:29:30.604177 | orchestrator | TASK [Copy fact file] **********************************************************
2025-04-10 00:29:31.813311 | orchestrator | Thursday 10 April 2025 00:29:30 +0000 (0:00:02.307) 0:00:02.392 ********
2025-04-10 00:29:31.813454 | orchestrator | ok: [testbed-manager]
2025-04-10 00:29:32.752778 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:29:32.753002 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:29:32.753977 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:29:32.755654 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:29:32.756325 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:29:32.756929 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:29:32.757671 | orchestrator |
2025-04-10 00:29:32.757853 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-04-10 00:29:32.758225 | orchestrator |
2025-04-10 00:29:32.758474 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-04-10 00:29:32.758859 | orchestrator | Thursday 10 April 2025 00:29:32 +0000 (0:00:02.150) 0:00:04.543 ********
2025-04-10 00:29:32.870469 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:29:32.870635 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:29:32.870677 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:29:32.871011 | orchestrator |
2025-04-10 00:29:32.872007 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-04-10 00:29:33.024620 | orchestrator | Thursday 10 April 2025 00:29:32 +0000 (0:00:00.120) 0:00:04.664 ********
2025-04-10 00:29:33.024746 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:29:33.025239 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:29:33.025277 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:29:33.025684 | orchestrator |
2025-04-10 00:29:33.025972 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-04-10 00:29:33.028679 | orchestrator | Thursday 10 April 2025 00:29:33 +0000 (0:00:00.153) 0:00:04.818 ********
2025-04-10 00:29:33.162561 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:29:33.162756 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:29:33.163698 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:29:33.163732 | orchestrator |
2025-04-10 00:29:33.164525 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-04-10 00:29:33.164557 | orchestrator | Thursday 10 April 2025 00:29:33 +0000 (0:00:00.138) 0:00:04.956 ********
2025-04-10 00:29:33.311435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 00:29:33.311629 | orchestrator |
2025-04-10 00:29:33.311869 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-04-10 00:29:33.312381 | orchestrator | Thursday 10 April 2025 00:29:33 +0000 (0:00:00.148) 0:00:05.105 ********
2025-04-10 00:29:33.770601 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:29:33.770773 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:29:33.770796 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:29:33.770810 | orchestrator |
2025-04-10 00:29:33.770831 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-04-10 00:29:33.772006 | orchestrator | Thursday 10 April 2025 00:29:33 +0000 (0:00:00.451) 0:00:05.557 ********
2025-04-10 00:29:33.898582 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:29:33.899881 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:29:33.899912 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:29:33.899933 | orchestrator |
2025-04-10 00:29:33.900253 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-04-10 00:29:33.901587 | orchestrator | Thursday 10 April 2025 00:29:33 +0000 (0:00:00.134) 0:00:05.691 ********
2025-04-10 00:29:34.924924 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:29:34.925415 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:29:34.926175 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:29:34.929047 | orchestrator |
2025-04-10 00:29:34.933859 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-04-10 00:29:34.933963 | orchestrator | Thursday 10 April 2025 00:29:34 +0000 (0:00:01.025) 0:00:06.717 ********
2025-04-10 00:29:35.389456 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:29:35.389618 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:29:35.389644 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:29:35.397340 | orchestrator |
2025-04-10 00:29:35.397376 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-04-10 00:29:36.491621 | orchestrator | Thursday 10 April 2025 00:29:35 +0000 (0:00:00.462) 0:00:07.179 ********
2025-04-10 00:29:36.491772 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:29:36.492311 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:29:36.492467 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:29:36.492842 | orchestrator |
2025-04-10 00:29:36.493724 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-04-10 00:29:36.494322 | orchestrator | Thursday 10 April 2025 00:29:36 +0000 (0:00:01.102) 0:00:08.282 ********
2025-04-10 00:29:50.049281 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:29:50.141586 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:29:50.209161 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:29:50.209239 | orchestrator |
2025-04-10 00:29:50.209257 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-04-10 00:29:50.209274 | orchestrator | Thursday 10 April 2025 00:29:50 +0000 (0:00:13.549) 0:00:21.832 ********
2025-04-10 00:29:50.209330 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:29:57.311103 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:29:57.311281 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:29:57.311323 | orchestrator |
2025-04-10 00:29:57.311341 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-04-10 00:29:57.311357 | orchestrator | Thursday 10 April 2025 00:29:50 +0000 (0:00:00.101) 0:00:21.933 ********
2025-04-10 00:29:57.311389 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:29:57.312251 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:29:57.312841 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:29:57.313816 | orchestrator |
2025-04-10 00:29:57.314638 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-04-10 00:29:57.315058 | orchestrator | Thursday 10 April 2025 00:29:57 +0000 (0:00:07.167) 0:00:29.100 ********
2025-04-10 00:29:57.760357 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:29:57.761203 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:29:57.761241 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:29:57.762072 | orchestrator |
2025-04-10 00:29:57.762101 | orchestrator | TASK [Copy fact files] *********************************************************
2025-04-10 00:30:01.320812 | orchestrator | Thursday 10 April 2025 00:29:57 +0000 (0:00:00.450) 0:00:29.551 ********
2025-04-10 00:30:01.320989 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-04-10 00:30:01.321162 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-04-10 00:30:01.321195 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-04-10 00:30:01.322290 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-04-10 00:30:01.323419 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-04-10 00:30:01.324425 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-04-10 00:30:01.325114 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-04-10 00:30:01.325863 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-04-10 00:30:01.326783 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-04-10 00:30:01.327224 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-04-10 00:30:01.327780 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-04-10 00:30:01.328644 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-04-10 00:30:01.328900 | orchestrator |
2025-04-10 00:30:01.329370 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-04-10 00:30:01.329688 | orchestrator | Thursday 10 April 2025 00:30:01 +0000 (0:00:03.558) 0:00:33.109 ********
2025-04-10 00:30:02.377175 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:30:02.377643 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:30:02.379722 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:30:02.380968 | orchestrator |
2025-04-10 00:30:02.382219 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-04-10 00:30:02.382741 | orchestrator |
2025-04-10 00:30:02.383788 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-04-10 00:30:02.384689 | orchestrator | Thursday 10 April 2025 00:30:02 +0000 (0:00:01.057) 0:00:34.166 ********
2025-04-10 00:30:04.127639 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:30:07.397952 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:30:07.400696 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:30:07.400741 | orchestrator | ok: [testbed-manager]
2025-04-10 00:30:07.402231 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:30:07.402715 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:30:07.403621 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:30:07.404356 | orchestrator |
2025-04-10 00:30:07.405145 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:30:07.406352 | orchestrator | 2025-04-10 00:30:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-10 00:30:07.406540 | orchestrator | 2025-04-10 00:30:07 | INFO  | Please wait and do not abort execution.
2025-04-10 00:30:07.406570 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:30:07.407339 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:30:07.408669 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:30:07.410492 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:30:07.410579 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-10 00:30:07.411501 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-10 00:30:07.412573 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-10 00:30:07.413471 | orchestrator |
2025-04-10 00:30:07.414112 | orchestrator | Thursday 10 April 2025 00:30:07 +0000 (0:00:05.022) 0:00:39.190 ********
2025-04-10 00:30:07.414556 | orchestrator | ===============================================================================
2025-04-10 00:30:07.415446 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.55s
2025-04-10 00:30:07.415796 | orchestrator | Install required packages (Debian) -------------------------------------- 7.17s
2025-04-10 00:30:07.416473 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.02s
2025-04-10 00:30:07.416793 | orchestrator | Copy fact files --------------------------------------------------------- 3.56s
2025-04-10 00:30:07.417516 | orchestrator | Create custom facts directory ------------------------------------------- 2.31s
2025-04-10 00:30:07.417826 | orchestrator | Copy fact file ---------------------------------------------------------- 2.15s
2025-04-10 00:30:07.418230 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2025-04-10 00:30:07.418689 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.06s
2025-04-10 00:30:07.419291 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2025-04-10 00:30:07.420110 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-04-10 00:30:07.420670 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2025-04-10 00:30:07.421317 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2025-04-10 00:30:07.421637 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.15s
2025-04-10 00:30:07.422116 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-04-10 00:30:07.422449 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.14s
2025-04-10 00:30:07.422755 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2025-04-10 00:30:07.423010 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2025-04-10 00:30:07.423336 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2025-04-10 00:30:07.914624 | orchestrator | + osism apply bootstrap
2025-04-10 00:30:09.356288 | orchestrator | 2025-04-10 00:30:09 | INFO  | Task 83567e15-7c8c-431b-91fb-d567e542e7a6 (bootstrap) was prepared for execution.
2025-04-10 00:30:12.678908 | orchestrator | 2025-04-10 00:30:09 | INFO  | It takes a moment until task 83567e15-7c8c-431b-91fb-d567e542e7a6 (bootstrap) has been started and output is visible here.
2025-04-10 00:30:12.679076 | orchestrator |
2025-04-10 00:30:12.680516 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-04-10 00:30:12.680643 | orchestrator |
2025-04-10 00:30:12.682505 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-04-10 00:30:12.683157 | orchestrator | Thursday 10 April 2025 00:30:12 +0000 (0:00:00.109) 0:00:00.109 ********
2025-04-10 00:30:12.775698 | orchestrator | ok: [testbed-manager]
2025-04-10 00:30:12.807501 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:30:12.836370 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:30:12.877560 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:30:12.962182 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:30:12.962799 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:30:12.964346 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:30:12.965101 | orchestrator |
2025-04-10 00:30:12.965896 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-04-10 00:30:12.966442 | orchestrator |
2025-04-10 00:30:12.967646 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-04-10 00:30:12.968097 | orchestrator | Thursday 10 April 2025 00:30:12 +0000 (0:00:00.286) 0:00:00.396 ********
2025-04-10 00:30:17.149434 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:30:17.150176 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:30:17.154392 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:30:17.154776 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:30:17.155077 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:30:17.156221 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:30:17.157170 | orchestrator | ok: [testbed-manager]
2025-04-10 00:30:17.157417 | orchestrator |
2025-04-10 00:30:17.158165 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-04-10 00:30:17.158698 | orchestrator |
2025-04-10 00:30:17.159207 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-04-10 00:30:17.159745 | orchestrator | Thursday 10 April 2025 00:30:17 +0000 (0:00:04.187) 0:00:04.583 ********
2025-04-10 00:30:17.257554 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-04-10 00:30:17.257757 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-04-10 00:30:17.258352 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-04-10 00:30:17.298879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-04-10 00:30:17.302065 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-04-10 00:30:17.302145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 00:30:17.303294 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-04-10 00:30:17.303793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 00:30:17.304356 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-04-10 00:30:17.305427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 00:30:17.355924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-04-10 00:30:17.358290 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-04-10 00:30:17.622723 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-04-10 00:30:17.622847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-04-10 00:30:17.622867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-04-10 00:30:17.622881 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-04-10 00:30:17.622912 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:30:17.624304 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:30:17.624690 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-04-10 00:30:17.625612 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-04-10 00:30:17.627020 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-04-10 00:30:17.627103 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-04-10 00:30:17.627929 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-10 00:30:17.628934 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-04-10 00:30:17.629717 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-04-10 00:30:17.630395 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-10 00:30:17.631354 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-04-10 00:30:17.632417 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-04-10 00:30:17.633325 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-04-10 00:30:17.634934 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-10 00:30:17.635381 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-04-10 00:30:17.636109 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-04-10 00:30:17.637353 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-10 00:30:17.637956 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-04-10 00:30:17.638328 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-04-10 00:30:17.639440 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:30:17.640208 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-04-10 00:30:17.640807 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-04-10 00:30:17.641656 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-10 00:30:17.642590 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-04-10 00:30:17.643087 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-04-10 00:30:17.644338 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-04-10 00:30:17.645366 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-10 00:30:17.645395 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:30:17.645667 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-04-10 00:30:17.646545 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-04-10 00:30:17.647040 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-04-10 00:30:17.648062 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-04-10 00:30:17.648530 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-04-10 00:30:17.649113 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-04-10 00:30:17.649606 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:30:17.650152 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-04-10 00:30:17.650779 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:30:17.651330 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-04-10 00:30:17.652647 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-04-10 00:30:17.653303 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:30:17.653609 | orchestrator |
2025-04-10 00:30:17.654282 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-04-10 00:30:17.654870 | orchestrator |
2025-04-10 00:30:17.655334 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] *************************
2025-04-10 00:30:17.655762 | orchestrator | Thursday 10 April 2025 00:30:17 +0000 (0:00:00.471) 0:00:05.055 ********
2025-04-10 00:30:17.692858 | orchestrator | ok: [testbed-manager]
2025-04-10 00:30:17.755384 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:30:17.780150 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:30:17.811033 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:30:17.865764 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:30:17.866981 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:30:17.868423 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:30:17.869835 | orchestrator |
2025-04-10 00:30:17.871085 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-04-10 00:30:17.872365 | orchestrator | Thursday 10 April 2025 00:30:17 +0000 (0:00:00.244) 0:00:05.300 ********
2025-04-10 00:30:19.118072 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:30:19.118435 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:30:19.118461 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:30:19.119605 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:30:19.120250 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:30:19.121012 | orchestrator | ok: [testbed-manager]
2025-04-10 00:30:19.123605 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:30:19.124426 | orchestrator |
2025-04-10 00:30:19.125477 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-04-10 00:30:19.126989 | orchestrator | Thursday 10 April 2025 00:30:19 +0000 (0:00:01.252) 0:00:06.552 ********
2025-04-10 00:30:20.408052 | orchestrator | ok: [testbed-manager]
2025-04-10 00:30:20.408474 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:30:20.409716 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:30:20.410845 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:30:20.411980 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:30:20.412756 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:30:20.413597 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:30:20.414719 | orchestrator |
2025-04-10 00:30:20.415400 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-04-10 00:30:20.416149 | orchestrator | Thursday 10 April 2025 00:30:20 +0000 (0:00:01.287) 0:00:07.839 ********
2025-04-10 00:30:20.709189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:30:20.709686 | orchestrator |
2025-04-10 00:30:20.709927 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-04-10 00:30:20.710845 | orchestrator | Thursday 10 April 2025 00:30:20 +0000 (0:00:00.298) 0:00:08.138 ********
2025-04-10 00:30:22.876063 | orchestrator | changed: [testbed-manager]
2025-04-10 00:30:22.876299 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:30:22.879333 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:30:22.879743 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:30:22.880324 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:30:22.880368 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:30:22.880391 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:30:22.880413 | orchestrator |
2025-04-10 00:30:22.880445 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-04-10 00:30:22.881319 | orchestrator | Thursday 10 April 2025 00:30:22 +0000 (0:00:02.170) 0:00:10.308 ********
2025-04-10 00:30:22.956446 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:30:23.177207 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:30:23.177394 | orchestrator |
2025-04-10 00:30:23.177591 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-04-10 00:30:23.178175 | orchestrator | Thursday 10 April 2025 00:30:23 +0000 (0:00:00.302) 0:00:10.611 ********
2025-04-10 00:30:24.296069 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:30:24.297299 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:30:24.300725 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:30:24.301384 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:30:24.301414 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:30:24.301435 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:30:24.301852 | orchestrator |
2025-04-10 00:30:24.303189 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-04-10 00:30:24.303871 | orchestrator | Thursday 10 April 2025 00:30:24 +0000 (0:00:01.117) 0:00:11.728 ********
2025-04-10 00:30:24.366538 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:30:24.916192 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:30:24.916539 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:30:24.918530 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:30:24.919010 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:30:24.920273 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:30:24.920936 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:30:24.922276 | orchestrator |
2025-04-10 00:30:24.922680 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-04-10 00:30:24.923766 | orchestrator | Thursday 10 April 2025 00:30:24 +0000 (0:00:00.621) 0:00:12.349 ********
2025-04-10 00:30:25.015730 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:30:25.040174 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:30:25.073869 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:30:25.395095 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:30:25.395350 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:30:25.400421 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:30:25.400618 | orchestrator | ok: [testbed-manager]
2025-04-10 00:30:25.400645 | orchestrator |
2025-04-10 00:30:25.400662 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-04-10 00:30:25.400682 | orchestrator | Thursday 10 April 2025 00:30:25 +0000 (0:00:00.478) 0:00:12.828 ********
2025-04-10 00:30:25.479618 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:30:25.513330 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:30:25.537772 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:30:25.562564 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:30:25.633999 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:30:25.634772 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:30:25.635236 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:30:25.635919 | orchestrator |
2025-04-10 00:30:25.637484 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-04-10 00:30:25.946636 | orchestrator | Thursday 10 April 2025 00:30:25 +0000 (0:00:00.239) 0:00:13.068 ********
2025-04-10 00:30:25.946775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:30:25.947538 | orchestrator |
2025-04-10 00:30:25.947578 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-04-10 00:30:25.948816 | orchestrator | Thursday 10 April 2025 00:30:25 +0000 (0:00:00.307) 0:00:13.375 ********
2025-04-10 00:30:26.288680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:30:26.288903 | orchestrator |
2025-04-10 00:30:26.292315 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-04-10 00:30:26.293084 | orchestrator | Thursday 10 April 2025 00:30:26 +0000 (0:00:00.346) 0:00:13.722 ********
2025-04-10 00:30:27.461378 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:30:27.461738 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:30:27.461781 | orchestrator | ok: [testbed-manager]
2025-04-10 00:30:27.462553 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:30:27.465408 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:30:27.466007 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:30:27.466811 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:30:27.467667 | orchestrator |
2025-04-10 00:30:27.468327 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-04-10 00:30:27.468889 | orchestrator | Thursday 10 April 2025 00:30:27 +0000 (0:00:01.170) 0:00:14.893 ********
2025-04-10 00:30:27.559839 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:30:27.589711 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:30:27.621314 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:30:27.644164 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:30:27.704913 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:30:27.705880 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:30:27.706316 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:30:27.707151 | orchestrator |
2025-04-10 00:30:27.707833 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-04-10 00:30:27.708295 | orchestrator | Thursday 10 April 2025 00:30:27 +0000 (0:00:00.246) 0:00:15.139 ********
2025-04-10 00:30:28.283498 | orchestrator | ok: [testbed-manager]
2025-04-10 00:30:28.285767 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:30:28.286550 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:30:28.286584 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:30:28.286773 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:30:28.287824 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:30:28.288297 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:30:28.289105 | orchestrator |
2025-04-10 00:30:28.290139 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-04-10 00:30:28.290742 | orchestrator | Thursday 10 April 2025 00:30:28 +0000 (0:00:00.576) 0:00:15.716 ********
2025-04-10 00:30:28.372105 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:30:28.406338 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:30:28.429386 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:30:28.453279 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:30:28.535780 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:30:28.536562 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:30:28.537065 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:30:28.537607 | orchestrator |
2025-04-10 00:30:28.538586 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-04-10 00:30:28.539316 | orchestrator | Thursday 10 April 2025 00:30:28 +0000 (0:00:00.253) 0:00:15.970 ********
2025-04-10 00:30:29.079610 | orchestrator | ok: [testbed-manager]
2025-04-10 00:30:29.080058 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:30:29.080161 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:30:29.080226 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:30:29.080800 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:30:29.080891 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:30:29.081271 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:30:29.081514 | orchestrator |
2025-04-10 00:30:29.081739 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-04-10 00:30:29.082928 | orchestrator | Thursday 10 April 2025 00:30:29 +0000 (0:00:00.541) 0:00:16.511 ********
2025-04-10 00:30:30.285146 | orchestrator | ok: [testbed-manager]
2025-04-10 00:30:30.285513 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:30:30.289809 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:30:30.289900 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:30:30.290206 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:30:30.290831 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:30:30.291071 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:30:30.291569 | orchestrator |
2025-04-10 00:30:30.292371 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-04-10 00:30:30.292936 | orchestrator | Thursday 10 April 2025 00:30:30 +0000 (0:00:01.152) 0:00:17.717 ********
2025-04-10 00:30:31.438946 | orchestrator | ok: [testbed-manager]
2025-04-10 00:30:31.440960 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:30:31.441214 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:30:31.441244 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:30:31.441263 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:30:31.442225 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:30:31.442809 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:30:31.443504 | orchestrator |
2025-04-10 00:30:31.444059 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-04-10 00:30:31.444690 | orchestrator | Thursday 10 April 2025 00:30:31 +0000 (0:00:01.152) 0:00:18.870 ********
2025-04-10 00:30:31.762330 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:30:31.763049 | orchestrator | 2025-04-10 00:30:31.764449 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-04-10 00:30:31.765438 | orchestrator | Thursday 10 April 2025 00:30:31 +0000 (0:00:00.324) 0:00:19.195 ******** 2025-04-10 00:30:31.847826 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:30:33.295039 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:30:33.295902 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:30:33.298688 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:30:33.299101 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:30:33.299162 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:30:33.299841 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:30:33.300626 | orchestrator | 2025-04-10 00:30:33.301192 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-10 00:30:33.301593 | orchestrator | Thursday 10 April 2025 00:30:33 +0000 (0:00:01.532) 0:00:20.727 ******** 2025-04-10 00:30:33.366636 | orchestrator | ok: [testbed-manager] 2025-04-10 00:30:33.416202 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:30:33.448385 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:30:33.505901 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:30:33.506176 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:30:33.506821 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:30:33.507018 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:30:33.507713 | orchestrator | 2025-04-10 00:30:33.508342 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-10 00:30:33.508808 | orchestrator | Thursday 10 April 2025 00:30:33 
+0000 (0:00:00.213) 0:00:20.940 ******** 2025-04-10 00:30:33.617590 | orchestrator | ok: [testbed-manager] 2025-04-10 00:30:33.645088 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:30:33.684799 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:30:33.771316 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:30:33.772025 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:30:33.772757 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:30:33.772857 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:30:33.773429 | orchestrator | 2025-04-10 00:30:33.773969 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-10 00:30:33.774494 | orchestrator | Thursday 10 April 2025 00:30:33 +0000 (0:00:00.263) 0:00:21.204 ******** 2025-04-10 00:30:33.864103 | orchestrator | ok: [testbed-manager] 2025-04-10 00:30:33.898907 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:30:33.925956 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:30:33.961934 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:30:34.023528 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:30:34.025564 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:30:34.025607 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:30:34.026711 | orchestrator | 2025-04-10 00:30:34.026745 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-10 00:30:34.027332 | orchestrator | Thursday 10 April 2025 00:30:34 +0000 (0:00:00.253) 0:00:21.457 ******** 2025-04-10 00:30:34.346917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:30:34.347152 | orchestrator | 2025-04-10 00:30:34.886005 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-10 00:30:34.886212 | 
orchestrator | Thursday 10 April 2025 00:30:34 +0000 (0:00:00.321) 0:00:21.779 ******** 2025-04-10 00:30:34.886249 | orchestrator | ok: [testbed-manager] 2025-04-10 00:30:34.886325 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:30:34.886445 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:30:34.888006 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:30:34.888578 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:30:34.889391 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:30:34.890522 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:30:34.890593 | orchestrator | 2025-04-10 00:30:34.891844 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-10 00:30:34.892194 | orchestrator | Thursday 10 April 2025 00:30:34 +0000 (0:00:00.540) 0:00:22.320 ******** 2025-04-10 00:30:34.991434 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:30:35.023323 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:30:35.049950 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:30:35.121410 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:30:35.121667 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:30:35.121696 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:30:35.121989 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:30:35.123305 | orchestrator | 2025-04-10 00:30:35.123415 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-10 00:30:35.125355 | orchestrator | Thursday 10 April 2025 00:30:35 +0000 (0:00:00.234) 0:00:22.555 ******** 2025-04-10 00:30:36.204328 | orchestrator | changed: [testbed-manager] 2025-04-10 00:30:36.208862 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:30:36.209084 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:30:36.210598 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:30:36.211196 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:30:36.211732 | orchestrator | 
changed: [testbed-node-1] 2025-04-10 00:30:36.213218 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:30:36.213754 | orchestrator | 2025-04-10 00:30:36.214007 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-04-10 00:30:36.215076 | orchestrator | Thursday 10 April 2025 00:30:36 +0000 (0:00:01.080) 0:00:23.636 ******** 2025-04-10 00:30:36.745865 | orchestrator | ok: [testbed-manager] 2025-04-10 00:30:36.746178 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:30:36.746202 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:30:36.746815 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:30:36.747400 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:30:36.747894 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:30:36.748362 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:30:36.749040 | orchestrator | 2025-04-10 00:30:36.749200 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-10 00:30:36.749881 | orchestrator | Thursday 10 April 2025 00:30:36 +0000 (0:00:00.542) 0:00:24.178 ******** 2025-04-10 00:30:37.873714 | orchestrator | ok: [testbed-manager] 2025-04-10 00:30:37.874810 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:30:37.874858 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:30:37.876428 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:30:37.876997 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:30:37.877942 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:30:37.878996 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:30:37.880089 | orchestrator | 2025-04-10 00:30:37.881764 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-10 00:30:37.882402 | orchestrator | Thursday 10 April 2025 00:30:37 +0000 (0:00:01.128) 0:00:25.306 ******** 2025-04-10 00:30:52.173619 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:30:52.175272 | orchestrator | ok: 
[testbed-node-4] 2025-04-10 00:30:52.176248 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:30:52.176278 | orchestrator | changed: [testbed-manager] 2025-04-10 00:30:52.177818 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:30:52.180046 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:30:52.180265 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:30:52.183132 | orchestrator | 2025-04-10 00:30:52.183928 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-04-10 00:30:52.183975 | orchestrator | Thursday 10 April 2025 00:30:52 +0000 (0:00:14.294) 0:00:39.601 ******** 2025-04-10 00:30:52.275085 | orchestrator | ok: [testbed-manager] 2025-04-10 00:30:52.303229 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:30:52.337480 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:30:52.357986 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:30:52.432575 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:30:52.433057 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:30:52.433098 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:30:52.434556 | orchestrator | 2025-04-10 00:30:52.437859 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-04-10 00:30:52.517782 | orchestrator | Thursday 10 April 2025 00:30:52 +0000 (0:00:00.265) 0:00:39.866 ******** 2025-04-10 00:30:52.517931 | orchestrator | ok: [testbed-manager] 2025-04-10 00:30:52.549979 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:30:52.580804 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:30:52.610073 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:30:52.675707 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:30:52.676174 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:30:52.681184 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:30:52.681468 | orchestrator | 2025-04-10 00:30:52.681499 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-04-10 00:30:52.681521 | orchestrator | Thursday 10 April 2025 00:30:52 +0000 (0:00:00.242) 0:00:40.109 ******** 2025-04-10 00:30:52.754097 | orchestrator | ok: [testbed-manager] 2025-04-10 00:30:52.781603 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:30:52.809200 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:30:52.834854 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:30:52.902908 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:30:52.903797 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:30:52.907432 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:30:53.239849 | orchestrator | 2025-04-10 00:30:53.239992 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-04-10 00:30:53.240047 | orchestrator | Thursday 10 April 2025 00:30:52 +0000 (0:00:00.227) 0:00:40.336 ******** 2025-04-10 00:30:53.240081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:30:53.242571 | orchestrator | 2025-04-10 00:30:53.242634 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-04-10 00:30:54.900359 | orchestrator | Thursday 10 April 2025 00:30:53 +0000 (0:00:00.334) 0:00:40.671 ******** 2025-04-10 00:30:54.900506 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:30:54.900899 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:30:54.902436 | orchestrator | ok: [testbed-manager] 2025-04-10 00:30:54.903506 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:30:54.903772 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:30:54.904910 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:30:54.905663 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:30:54.906629 | orchestrator | 2025-04-10 00:30:54.907876 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-04-10 00:30:54.908185 | orchestrator | Thursday 10 April 2025 00:30:54 +0000 (0:00:01.660) 0:00:42.332 ******** 2025-04-10 00:30:55.984568 | orchestrator | changed: [testbed-manager] 2025-04-10 00:30:55.984753 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:30:55.987005 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:30:55.988395 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:30:55.988897 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:30:55.989463 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:30:55.990409 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:30:55.990978 | orchestrator | 2025-04-10 00:30:55.991378 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-04-10 00:30:55.991912 | orchestrator | Thursday 10 April 2025 00:30:55 +0000 (0:00:01.083) 0:00:43.415 ******** 2025-04-10 00:30:56.822592 | orchestrator | ok: [testbed-manager] 2025-04-10 00:30:56.822766 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:30:56.823546 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:30:56.824137 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:30:56.825481 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:30:56.825877 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:30:56.826983 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:30:56.827026 | orchestrator | 2025-04-10 00:30:56.827093 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-04-10 00:30:56.828281 | orchestrator | Thursday 10 April 2025 00:30:56 +0000 (0:00:00.840) 0:00:44.256 ******** 2025-04-10 00:30:57.131599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 
00:30:57.132271 | orchestrator | 2025-04-10 00:30:57.133014 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-04-10 00:30:57.133537 | orchestrator | Thursday 10 April 2025 00:30:57 +0000 (0:00:00.308) 0:00:44.564 ******** 2025-04-10 00:30:58.159031 | orchestrator | changed: [testbed-manager] 2025-04-10 00:30:58.160319 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:30:58.161137 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:30:58.162966 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:30:58.163492 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:30:58.164453 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:30:58.165582 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:30:58.166150 | orchestrator | 2025-04-10 00:30:58.167218 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-04-10 00:30:58.168504 | orchestrator | Thursday 10 April 2025 00:30:58 +0000 (0:00:01.026) 0:00:45.591 ******** 2025-04-10 00:30:58.237823 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:30:58.271380 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:30:58.294418 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:30:58.319028 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:30:58.494662 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:30:58.495962 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:30:58.499549 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:30:58.499805 | orchestrator | 2025-04-10 00:30:58.499860 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-04-10 00:30:58.499885 | orchestrator | Thursday 10 April 2025 00:30:58 +0000 (0:00:00.336) 0:00:45.927 ******** 2025-04-10 00:31:11.345469 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:31:11.345792 | orchestrator | changed: [testbed-node-4] 2025-04-10 
00:31:11.345854 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:31:11.346785 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:31:11.347574 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:31:11.348283 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:31:11.349809 | orchestrator | changed: [testbed-manager] 2025-04-10 00:31:11.351045 | orchestrator | 2025-04-10 00:31:11.352143 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-04-10 00:31:11.353007 | orchestrator | Thursday 10 April 2025 00:31:11 +0000 (0:00:12.845) 0:00:58.772 ******** 2025-04-10 00:31:12.670807 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:31:12.674374 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:31:12.675625 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:31:12.675653 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:31:12.675668 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:31:12.675687 | orchestrator | ok: [testbed-manager] 2025-04-10 00:31:12.676508 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:31:12.677275 | orchestrator | 2025-04-10 00:31:12.678090 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-04-10 00:31:12.678552 | orchestrator | Thursday 10 April 2025 00:31:12 +0000 (0:00:01.330) 0:01:00.103 ******** 2025-04-10 00:31:13.555758 | orchestrator | ok: [testbed-manager] 2025-04-10 00:31:13.556555 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:31:13.556652 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:31:13.556674 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:31:13.557062 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:31:13.557522 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:31:13.560269 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:31:13.560813 | orchestrator | 2025-04-10 00:31:13.560843 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-04-10 00:31:13.561469 | orchestrator | Thursday 10 April 2025 00:31:13 +0000 (0:00:00.886) 0:01:00.989 ******** 2025-04-10 00:31:13.649298 | orchestrator | ok: [testbed-manager] 2025-04-10 00:31:13.691363 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:31:13.716374 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:31:13.748278 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:31:13.817964 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:31:13.818510 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:31:13.819513 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:31:13.820638 | orchestrator | 2025-04-10 00:31:13.821173 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-04-10 00:31:13.822124 | orchestrator | Thursday 10 April 2025 00:31:13 +0000 (0:00:00.261) 0:01:01.251 ******** 2025-04-10 00:31:13.912566 | orchestrator | ok: [testbed-manager] 2025-04-10 00:31:13.956543 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:31:13.995989 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:31:14.042945 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:31:14.114590 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:31:14.117119 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:31:14.117175 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:31:14.448434 | orchestrator | 2025-04-10 00:31:14.448576 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-04-10 00:31:14.448596 | orchestrator | Thursday 10 April 2025 00:31:14 +0000 (0:00:00.295) 0:01:01.546 ******** 2025-04-10 00:31:14.448628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:31:14.449347 | orchestrator | 2025-04-10 00:31:14.450248 | orchestrator | TASK 
[osism.commons.packages : Install needrestart package] ******************** 2025-04-10 00:31:14.451171 | orchestrator | Thursday 10 April 2025 00:31:14 +0000 (0:00:00.333) 0:01:01.880 ******** 2025-04-10 00:31:16.005328 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:31:16.008933 | orchestrator | ok: [testbed-manager] 2025-04-10 00:31:16.009709 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:31:16.010998 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:31:16.011750 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:31:16.012533 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:31:16.013587 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:31:16.014270 | orchestrator | 2025-04-10 00:31:16.015150 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-04-10 00:31:16.015681 | orchestrator | Thursday 10 April 2025 00:31:15 +0000 (0:00:01.556) 0:01:03.436 ******** 2025-04-10 00:31:16.603686 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:31:16.604541 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:31:16.605247 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:31:16.606086 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:31:16.606133 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:31:16.606608 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:31:16.606637 | orchestrator | changed: [testbed-manager] 2025-04-10 00:31:16.607189 | orchestrator | 2025-04-10 00:31:16.607530 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-04-10 00:31:16.608063 | orchestrator | Thursday 10 April 2025 00:31:16 +0000 (0:00:00.597) 0:01:04.034 ******** 2025-04-10 00:31:16.682376 | orchestrator | ok: [testbed-manager] 2025-04-10 00:31:16.709010 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:31:16.744478 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:31:16.776273 | orchestrator | ok: [testbed-node-5] 2025-04-10 
00:31:16.857980 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:31:16.858567 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:31:16.859198 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:31:16.859518 | orchestrator | 2025-04-10 00:31:16.859561 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-04-10 00:31:18.053590 | orchestrator | Thursday 10 April 2025 00:31:16 +0000 (0:00:00.257) 0:01:04.292 ******** 2025-04-10 00:31:18.053713 | orchestrator | ok: [testbed-manager] 2025-04-10 00:31:18.053774 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:31:18.054524 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:31:18.054942 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:31:18.056049 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:31:18.057691 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:31:18.058931 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:31:18.058980 | orchestrator | 2025-04-10 00:31:18.059664 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-04-10 00:31:18.059724 | orchestrator | Thursday 10 April 2025 00:31:18 +0000 (0:00:01.194) 0:01:05.486 ******** 2025-04-10 00:31:19.618826 | orchestrator | changed: [testbed-manager] 2025-04-10 00:31:19.620015 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:31:19.620713 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:31:19.620865 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:31:19.622472 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:31:19.624646 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:31:19.625146 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:31:19.625490 | orchestrator | 2025-04-10 00:31:19.626240 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-04-10 00:31:19.626796 | orchestrator | Thursday 10 April 2025 00:31:19 +0000 (0:00:01.565) 0:01:07.051 ******** 2025-04-10 
00:31:26.441297 | orchestrator | ok: [testbed-manager] 2025-04-10 00:31:26.441449 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:31:26.445398 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:31:26.447004 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:31:26.447075 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:31:26.447593 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:31:26.448247 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:31:26.448974 | orchestrator | 2025-04-10 00:31:26.450269 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-04-10 00:32:05.814565 | orchestrator | Thursday 10 April 2025 00:31:26 +0000 (0:00:06.821) 0:01:13.873 ******** 2025-04-10 00:32:05.814712 | orchestrator | ok: [testbed-manager] 2025-04-10 00:32:05.816199 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:32:05.816226 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:32:05.816245 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:32:05.817416 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:32:05.818604 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:32:05.819557 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:32:05.820544 | orchestrator | 2025-04-10 00:32:05.821973 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-04-10 00:32:05.823864 | orchestrator | Thursday 10 April 2025 00:32:05 +0000 (0:00:39.370) 0:01:53.243 ******** 2025-04-10 00:33:28.091178 | orchestrator | changed: [testbed-manager] 2025-04-10 00:33:28.091576 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:33:28.091614 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:33:28.091630 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:33:28.091644 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:33:28.091688 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:33:28.093892 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:33:28.094209 | 
orchestrator | 2025-04-10 00:33:28.095396 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-04-10 00:33:28.095913 | orchestrator | Thursday 10 April 2025 00:33:28 +0000 (0:01:22.271) 0:03:15.515 ******** 2025-04-10 00:33:29.668823 | orchestrator | ok: [testbed-manager] 2025-04-10 00:33:29.669871 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:33:29.669912 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:33:29.670614 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:33:29.671974 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:33:29.672654 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:33:29.673701 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:33:29.674813 | orchestrator | 2025-04-10 00:33:29.676395 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-04-10 00:33:29.678961 | orchestrator | Thursday 10 April 2025 00:33:29 +0000 (0:00:01.583) 0:03:17.098 ******** 2025-04-10 00:33:42.854140 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:33:42.854334 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:33:42.854358 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:33:42.854374 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:33:42.854394 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:33:42.854802 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:33:42.855544 | orchestrator | changed: [testbed-manager] 2025-04-10 00:33:42.855979 | orchestrator | 2025-04-10 00:33:42.856443 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-04-10 00:33:42.856944 | orchestrator | Thursday 10 April 2025 00:33:42 +0000 (0:00:13.179) 0:03:30.278 ******** 2025-04-10 00:33:43.275438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-04-10 00:33:43.276603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-04-10 00:33:43.276680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-04-10 00:33:43.280332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-04-10 00:33:43.280397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 
1024}]}) 2025-04-10 00:33:43.280434 | orchestrator | 2025-04-10 00:33:43.280449 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-04-10 00:33:43.280469 | orchestrator | Thursday 10 April 2025 00:33:43 +0000 (0:00:00.430) 0:03:30.708 ******** 2025-04-10 00:33:43.342166 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-10 00:33:43.373107 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-10 00:33:43.373237 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:33:43.408856 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:33:43.412450 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-10 00:33:43.438327 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:33:43.438425 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-10 00:33:43.469917 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:33:44.026867 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-10 00:33:44.027734 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-10 00:33:44.027777 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-10 00:33:44.028442 | orchestrator | 2025-04-10 00:33:44.029014 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-04-10 00:33:44.029490 | orchestrator | Thursday 10 April 2025 00:33:44 +0000 (0:00:00.751) 0:03:31.460 ******** 2025-04-10 00:33:44.121350 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-10 00:33:44.122165 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-10 00:33:44.122570 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-10 00:33:44.122811 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-10 00:33:44.123558 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-10 00:33:44.124122 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-10 00:33:44.124195 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-10 00:33:44.124491 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-10 00:33:44.125925 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-10 00:33:44.126119 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-10 00:33:44.126152 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-10 00:33:44.165799 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-10 00:33:44.165974 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-10 00:33:44.166243 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-10 00:33:44.166424 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-10 00:33:44.166649 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-10 00:33:44.167110 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-10 00:33:44.167478 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-10 00:33:44.167677 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-10 00:33:44.170251 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-10 00:33:44.210525 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-10 00:33:44.210642 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-10 00:33:44.210671 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-10 00:33:44.210701 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-10 00:33:44.210801 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:33:44.210839 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-10 00:33:44.211153 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-10 00:33:44.211183 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-10 00:33:44.211258 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-10 00:33:44.212148 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-10 00:33:44.212329 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-10 00:33:44.212518 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-10 00:33:44.248164 | 
orchestrator | skipping: [testbed-node-3] 2025-04-10 00:33:44.248728 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-10 00:33:44.249207 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-10 00:33:44.249860 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-10 00:33:44.251096 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-10 00:33:44.252089 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-10 00:33:44.252392 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-10 00:33:44.252423 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-10 00:33:44.253160 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-10 00:33:44.275526 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-10 00:33:44.275617 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:33:47.884009 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:33:47.884647 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-10 00:33:47.885261 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-10 00:33:47.885931 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-10 00:33:47.886522 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-10 00:33:47.895006 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-10 00:33:47.895814 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-10 00:33:47.895844 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-10 00:33:47.895861 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-10 00:33:47.895876 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-10 00:33:47.895892 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-10 00:33:47.895908 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-10 00:33:47.895923 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-10 00:33:47.895939 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-10 00:33:47.895954 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-10 00:33:47.895969 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-10 00:33:47.895986 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-10 00:33:47.896006 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-10 00:33:47.896712 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-10 00:33:47.896807 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-10 00:33:47.897333 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
2025-04-10 00:33:47.898179 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-10 00:33:47.898950 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-10 00:33:47.899257 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-10 00:33:47.900301 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-10 00:33:47.900995 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-10 00:33:47.901024 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-10 00:33:47.901646 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-10 00:33:47.902319 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-10 00:33:47.902787 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-10 00:33:47.903231 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-10 00:33:47.903973 | orchestrator | 2025-04-10 00:33:47.904297 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-04-10 00:33:47.905062 | orchestrator | Thursday 10 April 2025 00:33:47 +0000 (0:00:03.854) 0:03:35.314 ******** 2025-04-10 00:33:49.449948 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-10 00:33:49.450311 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-10 00:33:49.451229 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-10 00:33:49.451942 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-10 00:33:49.452769 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-10 00:33:49.453550 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-10 00:33:49.454003 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-10 00:33:49.454498 | orchestrator | 2025-04-10 00:33:49.454920 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-04-10 00:33:49.456471 | orchestrator | Thursday 10 April 2025 00:33:49 +0000 (0:00:01.568) 0:03:36.882 ******** 2025-04-10 00:33:49.512789 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-10 00:33:49.531929 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:33:49.619190 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-10 00:33:49.969245 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:33:49.969429 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-10 00:33:49.969855 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:33:49.969895 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-10 00:33:49.969918 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:33:49.970220 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-10 00:33:49.970719 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-10 00:33:49.972168 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-10 
00:33:49.973210 | orchestrator | 2025-04-10 00:33:49.974797 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-04-10 00:33:50.037316 | orchestrator | Thursday 10 April 2025 00:33:49 +0000 (0:00:00.518) 0:03:37.401 ******** 2025-04-10 00:33:50.037452 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-10 00:33:50.069362 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:33:50.153867 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-10 00:33:50.576859 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-10 00:33:50.581101 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:33:50.581155 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:33:50.581237 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-10 00:33:50.581254 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:33:50.581266 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-10 00:33:50.581278 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-10 00:33:50.581290 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-10 00:33:50.581304 | orchestrator | 2025-04-10 00:33:50.583417 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-04-10 00:33:50.583501 | orchestrator | Thursday 10 April 2025 00:33:50 +0000 (0:00:00.606) 0:03:38.008 ******** 2025-04-10 00:33:50.658994 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:33:50.685064 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:33:50.711967 | orchestrator 
| skipping: [testbed-node-4] 2025-04-10 00:33:50.739576 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:33:50.879860 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:33:50.880198 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:33:50.881294 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:33:50.884288 | orchestrator | 2025-04-10 00:33:56.612172 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-04-10 00:33:56.612307 | orchestrator | Thursday 10 April 2025 00:33:50 +0000 (0:00:00.305) 0:03:38.313 ******** 2025-04-10 00:33:56.612343 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:33:56.612426 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:33:56.612812 | orchestrator | ok: [testbed-manager] 2025-04-10 00:33:56.612844 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:33:56.613071 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:33:56.613307 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:33:56.613673 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:33:56.613957 | orchestrator | 2025-04-10 00:33:56.614196 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-04-10 00:33:56.614228 | orchestrator | Thursday 10 April 2025 00:33:56 +0000 (0:00:05.730) 0:03:44.044 ******** 2025-04-10 00:33:56.665059 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-04-10 00:33:56.721894 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:33:56.722147 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-04-10 00:33:56.779424 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:33:56.779734 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-04-10 00:33:56.837451 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:33:56.876682 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-04-10 00:33:56.876746 | orchestrator | skipping: [testbed-node-5] 2025-04-10 
00:33:56.924297 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-04-10 00:33:56.924370 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:33:56.994961 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-04-10 00:33:56.995007 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:33:56.995977 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-04-10 00:33:57.000126 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:33:58.037513 | orchestrator | 2025-04-10 00:33:58.037623 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-04-10 00:33:58.037639 | orchestrator | Thursday 10 April 2025 00:33:56 +0000 (0:00:00.385) 0:03:44.429 ******** 2025-04-10 00:33:58.037664 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-04-10 00:33:58.038757 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-04-10 00:33:58.038786 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-04-10 00:33:58.038976 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-04-10 00:33:58.039434 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-04-10 00:33:58.039465 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-04-10 00:33:58.040854 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-04-10 00:33:58.041736 | orchestrator | 2025-04-10 00:33:58.042066 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-04-10 00:33:58.042821 | orchestrator | Thursday 10 April 2025 00:33:58 +0000 (0:00:01.039) 0:03:45.469 ******** 2025-04-10 00:33:58.556242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:33:58.556492 | orchestrator | 2025-04-10 00:33:58.557224 | orchestrator | TASK [osism.commons.motd : Remove 
update-motd package] ************************* 2025-04-10 00:33:58.557930 | orchestrator | Thursday 10 April 2025 00:33:58 +0000 (0:00:00.517) 0:03:45.986 ******** 2025-04-10 00:33:59.694100 | orchestrator | ok: [testbed-manager] 2025-04-10 00:33:59.694843 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:33:59.695125 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:33:59.695789 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:33:59.697377 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:33:59.697746 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:33:59.697764 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:33:59.697775 | orchestrator | 2025-04-10 00:33:59.698656 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-04-10 00:33:59.698972 | orchestrator | Thursday 10 April 2025 00:33:59 +0000 (0:00:01.141) 0:03:47.128 ******** 2025-04-10 00:34:00.306571 | orchestrator | ok: [testbed-manager] 2025-04-10 00:34:00.307227 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:34:00.309608 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:34:00.311187 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:34:00.311616 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:34:00.312806 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:34:00.313785 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:34:00.314565 | orchestrator | 2025-04-10 00:34:00.315372 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-04-10 00:34:00.316487 | orchestrator | Thursday 10 April 2025 00:34:00 +0000 (0:00:00.612) 0:03:47.740 ******** 2025-04-10 00:34:00.958720 | orchestrator | changed: [testbed-manager] 2025-04-10 00:34:00.959472 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:34:00.959567 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:34:00.963173 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:34:00.963865 | orchestrator | changed: [testbed-node-5] 
2025-04-10 00:34:00.964630 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:34:00.965084 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:34:00.965931 | orchestrator | 2025-04-10 00:34:00.966412 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-04-10 00:34:00.967142 | orchestrator | Thursday 10 April 2025 00:34:00 +0000 (0:00:00.650) 0:03:48.391 ******** 2025-04-10 00:34:01.546303 | orchestrator | ok: [testbed-manager] 2025-04-10 00:34:01.546530 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:34:01.547065 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:34:01.547506 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:34:01.548522 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:34:01.550000 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:34:01.551049 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:34:01.551253 | orchestrator | 2025-04-10 00:34:01.552278 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-04-10 00:34:01.553336 | orchestrator | Thursday 10 April 2025 00:34:01 +0000 (0:00:00.588) 0:03:48.979 ******** 2025-04-10 00:34:02.606767 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744243468.89809, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.608307 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744243475.295491, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.608358 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744243479.0137084, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.608533 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744243490.1269681, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.608558 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744243490.9557626, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.608580 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744243481.3653822, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.609104 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744243489.6826992, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.609398 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744243502.2345026, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.610167 | orchestrator | changed: [testbed-node-5] => 
(item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744243404.8618727, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.610643 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744243418.9095697, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.611324 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744243420.3800428, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.612160 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 
2049, 'nlink': 1, 'atime': 1744243406.914991, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.613086 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744243406.833296, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.613119 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744243418.7598114, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 00:34:02.613397 | orchestrator | 2025-04-10 00:34:02.613428 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-04-10 00:34:02.614095 | orchestrator | Thursday 10 April 2025 00:34:02 +0000 (0:00:01.059) 0:03:50.039 ******** 2025-04-10 00:34:03.758756 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:34:03.762235 | orchestrator | changed: [testbed-manager] 2025-04-10 00:34:03.763542 | orchestrator | changed: [testbed-node-4] 2025-04-10 
00:34:03.763576 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:34:03.763594 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:34:03.764006 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:34:03.765507 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:34:03.766129 | orchestrator |
2025-04-10 00:34:03.767385 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-04-10 00:34:03.767462 | orchestrator | Thursday 10 April 2025 00:34:03 +0000 (0:00:01.152) 0:03:51.192 ********
2025-04-10 00:34:04.940992 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:34:04.942133 | orchestrator | changed: [testbed-manager]
2025-04-10 00:34:04.942291 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:34:04.942401 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:34:04.943342 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:34:04.943489 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:34:04.945119 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:34:04.945260 | orchestrator |
2025-04-10 00:34:04.945282 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-04-10 00:34:04.945301 | orchestrator | Thursday 10 April 2025 00:34:04 +0000 (0:00:01.181) 0:03:52.373 ********
2025-04-10 00:34:05.008288 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:34:05.077402 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:34:05.119313 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:34:05.151068 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:34:05.228132 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:34:05.228271 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:34:05.229146 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:34:05.230002 | orchestrator |
2025-04-10 00:34:05.231774 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-04-10 00:34:05.232509 | orchestrator | Thursday 10 April 2025 00:34:05 +0000 (0:00:00.288) 0:03:52.661 ********
2025-04-10 00:34:05.974571 | orchestrator | ok: [testbed-manager]
2025-04-10 00:34:05.975344 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:34:05.975919 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:34:05.976679 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:34:05.977001 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:34:05.977815 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:34:05.979626 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:34:05.981466 | orchestrator |
2025-04-10 00:34:05.981636 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-04-10 00:34:05.983445 | orchestrator | Thursday 10 April 2025 00:34:05 +0000 (0:00:00.744) 0:03:53.406 ********
2025-04-10 00:34:06.398344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:34:06.399859 | orchestrator |
2025-04-10 00:34:06.400904 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-04-10 00:34:06.401926 | orchestrator | Thursday 10 April 2025 00:34:06 +0000 (0:00:00.425) 0:03:53.831 ********
2025-04-10 00:34:14.066909 | orchestrator | ok: [testbed-manager]
2025-04-10 00:34:14.068369 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:34:14.068505 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:34:14.068522 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:34:14.068547 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:34:14.068601 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:34:14.068614 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:34:14.068625 | orchestrator |
2025-04-10 00:34:14.068641 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-04-10 00:34:14.069948 | orchestrator | Thursday 10 April 2025 00:34:14 +0000 (0:00:07.663) 0:04:01.495 ********
2025-04-10 00:34:15.269515 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:34:15.270383 | orchestrator | ok: [testbed-manager]
2025-04-10 00:34:15.271355 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:34:15.271397 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:34:15.271880 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:34:15.272491 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:34:15.273888 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:34:15.274713 | orchestrator |
2025-04-10 00:34:15.275601 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-04-10 00:34:15.276576 | orchestrator | Thursday 10 April 2025 00:34:15 +0000 (0:00:01.207) 0:04:02.702 ********
2025-04-10 00:34:16.280545 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:34:16.280882 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:34:16.284329 | orchestrator | ok: [testbed-manager]
2025-04-10 00:34:16.284786 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:34:16.284813 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:34:16.284829 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:34:16.284848 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:34:16.285084 | orchestrator |
2025-04-10 00:34:16.285438 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-04-10 00:34:16.285836 | orchestrator | Thursday 10 April 2025 00:34:16 +0000 (0:00:01.010) 0:04:03.712 ********
2025-04-10 00:34:16.726582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:34:16.726736 | orchestrator |
2025-04-10 00:34:16.726801 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-04-10 00:34:16.726864 | orchestrator | Thursday 10 April 2025 00:34:16 +0000 (0:00:00.447) 0:04:04.160 ********
2025-04-10 00:34:25.387812 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:34:25.388135 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:34:25.388174 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:34:25.388228 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:34:25.391392 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:34:25.391456 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:34:25.391479 | orchestrator | changed: [testbed-manager]
2025-04-10 00:34:25.391610 | orchestrator |
2025-04-10 00:34:25.392000 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-04-10 00:34:25.392434 | orchestrator | Thursday 10 April 2025 00:34:25 +0000 (0:00:08.659) 0:04:12.820 ********
2025-04-10 00:34:26.008355 | orchestrator | changed: [testbed-manager]
2025-04-10 00:34:26.008967 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:34:26.009142 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:34:26.009755 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:34:26.010188 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:34:26.010794 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:34:26.011241 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:34:26.012268 | orchestrator |
2025-04-10 00:34:26.012582 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-04-10 00:34:26.013173 | orchestrator | Thursday 10 April 2025 00:34:26 +0000 (0:00:00.621) 0:04:13.441 ********
2025-04-10 00:34:27.172903 | orchestrator | changed: [testbed-manager]
2025-04-10 00:34:27.173475 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:34:27.173516 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:34:27.174854 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:34:27.175067 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:34:27.176157 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:34:27.177269 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:34:27.177313 | orchestrator |
2025-04-10 00:34:27.178954 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-04-10 00:34:27.180353 | orchestrator | Thursday 10 April 2025 00:34:27 +0000 (0:00:01.161) 0:04:14.603 ********
2025-04-10 00:34:28.270778 | orchestrator | changed: [testbed-manager]
2025-04-10 00:34:28.272129 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:34:28.272945 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:34:28.273546 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:34:28.274352 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:34:28.274935 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:34:28.275803 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:34:28.276084 | orchestrator |
2025-04-10 00:34:28.276466 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-04-10 00:34:28.277122 | orchestrator | Thursday 10 April 2025 00:34:28 +0000 (0:00:01.096) 0:04:15.700 ********
2025-04-10 00:34:28.389838 | orchestrator | ok: [testbed-manager]
2025-04-10 00:34:28.429398 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:34:28.462263 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:34:28.500121 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:34:28.582967 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:34:28.585119 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:34:28.586803 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:34:28.587075 | orchestrator |
2025-04-10 00:34:28.588353 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-04-10 00:34:28.590142 | orchestrator | Thursday 10 April 2025 00:34:28 +0000 (0:00:00.315) 0:04:16.016 ********
2025-04-10 00:34:28.689571 | orchestrator | ok: [testbed-manager]
2025-04-10 00:34:28.726463 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:34:28.785649 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:34:28.825105 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:34:28.928319 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:34:28.928505 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:34:28.928754 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:34:28.929225 | orchestrator |
2025-04-10 00:34:28.930578 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-04-10 00:34:29.068632 | orchestrator | Thursday 10 April 2025 00:34:28 +0000 (0:00:00.345) 0:04:16.361 ********
2025-04-10 00:34:29.068752 | orchestrator | ok: [testbed-manager]
2025-04-10 00:34:29.105984 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:34:29.140959 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:34:29.180143 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:34:29.257484 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:34:29.258094 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:34:29.258191 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:34:29.259082 | orchestrator |
2025-04-10 00:34:29.259616 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-04-10 00:34:29.260070 | orchestrator | Thursday 10 April 2025 00:34:29 +0000 (0:00:00.330) 0:04:16.692 ********
2025-04-10 00:34:35.058840 | orchestrator | ok: [testbed-manager]
2025-04-10 00:34:35.059003 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:34:35.059072 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:34:35.059355 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:34:35.059734 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:34:35.060182 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:34:35.060504 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:34:35.061238 | orchestrator |
2025-04-10 00:34:35.061884 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-04-10 00:34:35.064037 | orchestrator | Thursday 10 April 2025 00:34:35 +0000 (0:00:05.798) 0:04:22.490 ********
2025-04-10 00:34:35.573481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:34:35.574723 | orchestrator |
2025-04-10 00:34:35.576717 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-04-10 00:34:35.579628 | orchestrator | Thursday 10 April 2025 00:34:35 +0000 (0:00:00.511) 0:04:23.002 ********
2025-04-10 00:34:35.638761 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-04-10 00:34:35.639815 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-04-10 00:34:35.683299 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:34:35.736284 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-04-10 00:34:35.737399 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-04-10 00:34:35.737928 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-04-10 00:34:35.738008 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-04-10 00:34:35.781778 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:34:35.782517 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-04-10 00:34:35.784905 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-04-10 00:34:35.821711 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:34:35.822229 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-04-10 00:34:35.825215 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-04-10 00:34:35.859647 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:34:35.859785 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-04-10 00:34:35.969072 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-04-10 00:34:35.969240 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:34:35.969568 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:34:35.971174 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-04-10 00:34:35.972374 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-04-10 00:34:35.973272 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:34:35.973314 | orchestrator |
2025-04-10 00:34:35.973829 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-04-10 00:34:35.974445 | orchestrator | Thursday 10 April 2025 00:34:35 +0000 (0:00:00.397) 0:04:23.400 ********
2025-04-10 00:34:36.408200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:34:36.408421 | orchestrator |
2025-04-10 00:34:36.408449 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-04-10 00:34:36.411652 | orchestrator | Thursday 10 April 2025 00:34:36 +0000 (0:00:00.441) 0:04:23.841 ********
2025-04-10 00:34:36.483649 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-04-10 00:34:36.483898 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-04-10 00:34:36.520481 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:34:36.557213 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:34:36.560641 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-04-10 00:34:36.593646 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:34:36.595631 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-04-10 00:34:36.640626 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-04-10 00:34:36.721711 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:34:36.721816 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-04-10 00:34:36.721848 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:34:36.723463 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:34:36.724791 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-04-10 00:34:36.726460 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:34:36.729331 | orchestrator |
2025-04-10 00:34:37.301393 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-04-10 00:34:37.301515 | orchestrator | Thursday 10 April 2025 00:34:36 +0000 (0:00:00.315) 0:04:24.156 ********
2025-04-10 00:34:37.301549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:34:37.302228 | orchestrator |
2025-04-10 00:34:37.304800 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-04-10 00:34:37.305060 | orchestrator | Thursday 10 April 2025 00:34:37 +0000 (0:00:00.578) 0:04:24.735 ********
2025-04-10 00:35:11.188372 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:35:11.188490 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:35:11.188503 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:35:11.191557 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:35:11.192584 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:35:11.193045 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:35:11.193899 | orchestrator | changed: [testbed-manager]
2025-04-10 00:35:11.195447 | orchestrator |
2025-04-10 00:35:11.196197 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-04-10 00:35:11.196864 | orchestrator | Thursday 10 April 2025 00:35:11 +0000 (0:00:33.880) 0:04:58.615 ********
2025-04-10 00:35:18.590825 | orchestrator | changed: [testbed-manager]
2025-04-10 00:35:18.591111 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:35:18.591153 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:35:18.591205 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:35:18.591253 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:35:18.591503 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:35:18.591624 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:35:18.591946 | orchestrator |
2025-04-10 00:35:18.592144 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-04-10 00:35:18.595057 | orchestrator | Thursday 10 April 2025 00:35:18 +0000 (0:00:07.406) 0:05:06.022 ********
2025-04-10 00:35:25.833728 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:35:25.834783 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:35:25.834824 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:35:25.834848 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:35:25.837463 | orchestrator | changed: [testbed-manager]
2025-04-10 00:35:25.837650 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:35:25.838210 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:35:25.838936 | orchestrator |
2025-04-10 00:35:25.840082 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-04-10 00:35:25.840865 | orchestrator | Thursday 10 April 2025 00:35:25 +0000 (0:00:07.240) 0:05:13.263 ********
2025-04-10 00:35:27.416657 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:35:27.417141 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:35:27.417424 | orchestrator | ok: [testbed-manager]
2025-04-10 00:35:27.417726 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:35:27.418185 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:35:27.418950 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:35:27.419175 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:35:27.419745 | orchestrator |
2025-04-10 00:35:27.420567 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-04-10 00:35:27.420638 | orchestrator | Thursday 10 April 2025 00:35:27 +0000 (0:00:01.585) 0:05:14.848 ********
2025-04-10 00:35:33.219735 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:35:33.221411 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:35:33.223264 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:35:33.224349 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:35:33.226264 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:35:33.226832 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:35:33.229742 | orchestrator | changed: [testbed-manager]
2025-04-10 00:35:33.669695 | orchestrator |
2025-04-10 00:35:33.669856 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-04-10 00:35:33.669888 | orchestrator | Thursday 10 April 2025 00:35:33 +0000 (0:00:00.450) 0:05:20.652 ********
2025-04-10 00:35:33.669936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:35:33.671059 | orchestrator |
2025-04-10 00:35:33.672133 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-04-10 00:35:33.673069 | orchestrator | Thursday 10 April 2025 00:35:33 +0000 (0:00:00.450) 0:05:21.103 ********
2025-04-10 00:35:34.396766 | orchestrator | changed: [testbed-manager]
2025-04-10 00:35:34.397377 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:35:34.398000 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:35:34.398999 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:35:34.400162 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:35:34.400466 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:35:34.401624 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:35:34.401975 | orchestrator |
2025-04-10 00:35:34.402286 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-04-10 00:35:34.403492 | orchestrator | Thursday 10 April 2025 00:35:34 +0000 (0:00:00.726) 0:05:21.829 ********
2025-04-10 00:35:35.934473 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:35:35.934930 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:35:35.936025 | orchestrator | ok: [testbed-manager]
2025-04-10 00:35:35.936673 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:35:35.937787 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:35:35.939123 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:35:35.939367 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:35:35.940344 | orchestrator |
2025-04-10 00:35:35.940910 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-04-10 00:35:35.941431 | orchestrator | Thursday 10 April 2025 00:35:35 +0000 (0:00:01.536) 0:05:23.366 ********
2025-04-10 00:35:36.733928 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:35:36.734270 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:35:36.734303 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:35:36.735039 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:35:36.735427 | orchestrator | changed: [testbed-manager]
2025-04-10 00:35:36.736389 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:35:36.737710 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:35:36.737728 | orchestrator |
2025-04-10 00:35:36.738285 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-04-10 00:35:36.738856 | orchestrator | Thursday 10 April 2025 00:35:36 +0000 (0:00:00.801) 0:05:24.167 ********
2025-04-10 00:35:36.837252 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:35:36.873791 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:35:36.908084 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:35:36.956379 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:35:37.031318 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:35:37.031879 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:35:37.032712 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:35:37.034732 | orchestrator |
2025-04-10 00:35:37.039245 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-04-10 00:35:37.039993 | orchestrator | Thursday 10 April 2025 00:35:37 +0000 (0:00:00.296) 0:05:24.464 ********
2025-04-10 00:35:37.111977 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:35:37.156096 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:35:37.192797 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:35:37.225986 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:35:37.259172 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:35:37.474880 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:35:37.476234 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:35:37.476932 | orchestrator |
2025-04-10 00:35:37.477693 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-04-10 00:35:37.478371 | orchestrator | Thursday 10 April 2025 00:35:37 +0000 (0:00:00.442) 0:05:24.907 ********
2025-04-10 00:35:37.580898 | orchestrator | ok: [testbed-manager]
2025-04-10 00:35:37.625887 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:35:37.664447 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:35:37.712093 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:35:37.793913 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:35:37.794203 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:35:37.794243 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:35:37.796301 | orchestrator |
2025-04-10 00:35:37.796428 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-04-10 00:35:37.798267 | orchestrator | Thursday 10 April 2025 00:35:37 +0000 (0:00:00.320) 0:05:25.227 ********
2025-04-10 00:35:37.883796 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:35:37.919637 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:35:37.954443 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:35:37.992683 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:35:38.046495 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:35:38.122606 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:35:38.123164 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:35:38.126722 | orchestrator |
2025-04-10 00:35:38.127520 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-04-10 00:35:38.128113 | orchestrator | Thursday 10 April 2025 00:35:38 +0000 (0:00:00.327) 0:05:25.555 ********
2025-04-10 00:35:38.227924 | orchestrator | ok: [testbed-manager]
2025-04-10 00:35:38.262167 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:35:38.291796 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:35:38.328210 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:35:38.415512 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:35:38.416148 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:35:38.417162 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:35:38.425735 | orchestrator |
2025-04-10 00:35:38.491670 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-04-10 00:35:38.491741 | orchestrator | Thursday 10 April 2025 00:35:38 +0000 (0:00:00.293) 0:05:25.848 ********
2025-04-10 00:35:38.491770 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:35:38.521990 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:35:38.556167 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:35:38.641251 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:35:38.700333 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:35:38.701706 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:35:38.702181 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:35:38.702722 | orchestrator |
2025-04-10 00:35:38.703581 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-04-10 00:35:38.704447 | orchestrator | Thursday 10 April 2025 00:35:38 +0000 (0:00:00.286) 0:05:26.135 ********
2025-04-10 00:35:38.772375 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:35:38.804189 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:35:38.890366 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:35:39.047197 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:35:39.118429 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:35:39.119289 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:35:39.120053 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:35:39.120512 | orchestrator |
2025-04-10 00:35:39.123332 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-04-10 00:35:39.583287 | orchestrator | Thursday 10 April 2025 00:35:39 +0000 (0:00:00.415) 0:05:26.551 ********
2025-04-10 00:35:39.583420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:35:39.587227 | orchestrator |
2025-04-10 00:35:39.587360 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-04-10 00:35:39.587682 | orchestrator | Thursday 10 April 2025 00:35:39 +0000 (0:00:00.463) 0:05:27.015 ********
2025-04-10 00:35:40.458485 | orchestrator | ok: [testbed-manager]
2025-04-10 00:35:40.459314 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:35:40.459363 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:35:40.459384 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:35:40.459417 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:35:40.459740 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:35:40.460681 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:35:40.461259 | orchestrator |
2025-04-10 00:35:40.461309 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-04-10 00:35:40.462481 | orchestrator | Thursday 10 April 2025 00:35:40 +0000 (0:00:00.875) 0:05:27.890 ********
2025-04-10 00:35:43.327524 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:35:43.328571 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:35:43.328889 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:35:43.330438 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:35:43.330960 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:35:43.332844 | orchestrator | ok: [testbed-manager]
2025-04-10 00:35:43.333225 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:35:43.334109 | orchestrator |
2025-04-10 00:35:43.335067 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-04-10 00:35:43.335633 | orchestrator | Thursday 10 April 2025 00:35:43 +0000 (0:00:02.869) 0:05:30.759 ********
2025-04-10 00:35:43.396584 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-04-10 00:35:43.397406 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-04-10 00:35:43.499769 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-04-10 00:35:43.501325 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-04-10 00:35:43.502247 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-04-10 00:35:43.503043 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-04-10 00:35:43.573240 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:35:43.574190 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-04-10 00:35:43.574683 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-04-10 00:35:43.673243 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:35:43.675112 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-04-10 00:35:43.675157 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-04-10 00:35:43.675234 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-04-10 00:35:43.675259 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-04-10 00:35:43.766492 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:35:43.767545 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-04-10 00:35:43.767976 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-04-10 00:35:43.769317 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-04-10 00:35:43.840187 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:35:43.840694 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-04-10 00:35:43.842375 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-04-10 00:35:43.976560 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:35:43.976968 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-04-10 00:35:43.977962 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:35:43.979296 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-04-10 00:35:43.980137 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-04-10 00:35:43.981089 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-04-10 00:35:43.981604 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:35:43.982563 | orchestrator |
2025-04-10 00:35:43.983639 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-04-10 00:35:43.984123 | orchestrator | Thursday 10 April 2025 00:35:43 +0000 (0:00:00.648) 0:05:31.407 ********
2025-04-10 00:35:50.093954 | orchestrator | ok: [testbed-manager]
2025-04-10 00:35:50.094990 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:35:50.095151 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:35:50.099485 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:35:50.099599 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:35:50.099620 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:35:50.099640 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:35:50.101978 | orchestrator |
2025-04-10 00:35:51.194623 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-04-10 00:35:51.194747 | orchestrator | Thursday 10 April 2025 00:35:50 +0000 (0:00:06.119) 0:05:37.526 ********
2025-04-10 00:35:51.194781 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:35:51.194851 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:35:51.196207 | orchestrator | ok: [testbed-manager]
2025-04-10 00:35:51.196849 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:35:51.197960 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:35:51.199256 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:35:51.200108 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:35:51.201121 | orchestrator |
2025-04-10 00:35:51.201399 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-04-10 00:35:51.202284 | orchestrator | Thursday 10 April 2025 00:35:51 +0000 (0:00:01.098) 0:05:38.625 ********
2025-04-10 00:35:58.752536 | orchestrator | ok: [testbed-manager]
2025-04-10 00:35:58.752743 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:35:58.753577 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:35:58.753623 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:35:58.753654 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:35:58.756563 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:35:58.757054 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:35:58.757133 | orchestrator |
2025-04-10 00:35:58.757166 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-04-10 00:35:58.757250 | orchestrator | Thursday 10 April 2025 00:35:58 +0000 (0:00:07.558) 0:05:46.183 ********
2025-04-10 00:36:01.929456 | orchestrator | changed: [testbed-manager]
2025-04-10 00:36:01.930460 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:36:01.930940 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:36:01.932680 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:36:01.934867 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:36:01.935171 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:36:01.936157 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:36:01.936311 | orchestrator |
2025-04-10 00:36:01.937658 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-04-10 00:36:01.941118 | orchestrator | Thursday 10 April 2025 00:36:01 +0000 (0:00:03.178) 0:05:49.362 ********
2025-04-10 00:36:03.467238 | orchestrator | ok: [testbed-manager]
2025-04-10 00:36:03.468182 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:36:03.468206 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:36:03.468216 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:36:03.468225 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:36:03.468233 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:36:03.468242 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:36:03.468275 | orchestrator |
2025-04-10 00:36:03.469108 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-04-10 00:36:03.469706 | orchestrator | Thursday 10 April 2025 00:36:03 +0000 (0:00:01.534) 0:05:50.897 ********
2025-04-10 00:36:04.835380 | orchestrator | ok: [testbed-manager]
2025-04-10 00:36:04.835583 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:36:04.836700 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:36:04.838536 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:36:04.839234 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:36:04.839278 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:36:04.839376 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:36:04.840036 | orchestrator |
2025-04-10 00:36:04.840606 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-04-10 00:36:04.841112 | orchestrator | Thursday 10 April 2025 00:36:04 +0000 (0:00:01.370) 0:05:52.267 ********
2025-04-10 00:36:05.045980 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:36:05.109037 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:36:05.202845 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:36:05.271270 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:36:05.506450 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:36:05.506663 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:36:05.507992 | orchestrator | changed: [testbed-manager]
2025-04-10 00:36:05.508890 | orchestrator |
2025-04-10 00:36:05.510270 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-04-10 00:36:05.511210 | orchestrator | Thursday 10 April 2025 00:36:05 +0000 (0:00:00.671) 0:05:52.938 ******** 2025-04-10 00:36:15.177615 | orchestrator | ok: [testbed-manager] 2025-04-10 00:36:15.179553 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:36:15.181379 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:36:15.184187 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:36:15.184753 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:36:15.184776 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:36:15.184792 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:36:15.187690 | orchestrator | 2025-04-10 00:36:15.189141 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-04-10 00:36:15.191423 | orchestrator | Thursday 10 April 2025 00:36:15 +0000 (0:00:09.673) 0:06:02.611 ******** 2025-04-10 00:36:16.158684 | orchestrator | changed: [testbed-manager] 2025-04-10 00:36:16.159277 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:36:16.159364 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:36:16.160243 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:36:16.160649 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:36:16.161715 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:36:16.161990 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:36:16.162665 | orchestrator | 2025-04-10 00:36:16.163251 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-04-10 00:36:16.163954 | orchestrator | Thursday 10 April 2025 00:36:16 +0000 (0:00:00.979) 0:06:03.591 ******** 2025-04-10 00:36:28.575922 | orchestrator | ok: [testbed-manager] 2025-04-10 00:36:28.576576 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:36:28.576696 | orchestrator | changed: [testbed-node-5] 
2025-04-10 00:36:28.576716 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:36:28.576747 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:36:28.577245 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:36:28.577961 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:36:28.579050 | orchestrator | 2025-04-10 00:36:28.579760 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-04-10 00:36:28.580513 | orchestrator | Thursday 10 April 2025 00:36:28 +0000 (0:00:12.413) 0:06:16.004 ******** 2025-04-10 00:36:40.638354 | orchestrator | ok: [testbed-manager] 2025-04-10 00:36:40.639144 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:36:40.639188 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:36:40.639205 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:36:40.639221 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:36:40.639237 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:36:40.639272 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:36:40.639297 | orchestrator | 2025-04-10 00:36:40.639914 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-04-10 00:36:40.640744 | orchestrator | Thursday 10 April 2025 00:36:40 +0000 (0:00:12.059) 0:06:28.063 ******** 2025-04-10 00:36:41.081931 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-04-10 00:36:41.082703 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-04-10 00:36:41.861403 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-04-10 00:36:41.861580 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-04-10 00:36:41.861611 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-04-10 00:36:41.863546 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-04-10 00:36:41.863782 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-04-10 00:36:41.863813 | 
orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-04-10 00:36:41.864790 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-04-10 00:36:41.865770 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-04-10 00:36:41.866776 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-04-10 00:36:41.867397 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-04-10 00:36:41.868189 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-04-10 00:36:41.869131 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-04-10 00:36:41.869919 | orchestrator | 2025-04-10 00:36:41.870157 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-04-10 00:36:41.871120 | orchestrator | Thursday 10 April 2025 00:36:41 +0000 (0:00:01.228) 0:06:29.292 ******** 2025-04-10 00:36:42.006552 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:36:42.072841 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:36:42.136368 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:36:42.220326 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:36:42.291188 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:36:42.419522 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:36:42.419894 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:36:42.420699 | orchestrator | 2025-04-10 00:36:42.421015 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-04-10 00:36:42.421367 | orchestrator | Thursday 10 April 2025 00:36:42 +0000 (0:00:00.560) 0:06:29.853 ******** 2025-04-10 00:36:45.988788 | orchestrator | ok: [testbed-manager] 2025-04-10 00:36:45.989795 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:36:45.995480 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:36:45.995839 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:36:45.996917 | orchestrator | changed: 
[testbed-node-0] 2025-04-10 00:36:45.997343 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:36:45.998428 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:36:45.999238 | orchestrator | 2025-04-10 00:36:46.000798 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-04-10 00:36:46.001834 | orchestrator | Thursday 10 April 2025 00:36:45 +0000 (0:00:03.568) 0:06:33.421 ******** 2025-04-10 00:36:46.118347 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:36:46.368209 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:36:46.434718 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:36:46.506140 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:36:46.617976 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:36:46.726290 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:36:46.726424 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:36:46.726807 | orchestrator | 2025-04-10 00:36:46.727244 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-04-10 00:36:46.727661 | orchestrator | Thursday 10 April 2025 00:36:46 +0000 (0:00:00.738) 0:06:34.159 ******** 2025-04-10 00:36:46.821328 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-04-10 00:36:46.904176 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-04-10 00:36:46.904287 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:36:46.904342 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-04-10 00:36:46.904362 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-04-10 00:36:46.981387 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:36:46.981638 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-04-10 00:36:46.981676 | orchestrator | skipping: [testbed-node-4] => 
(item=python-docker)  2025-04-10 00:36:47.073292 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:36:47.073799 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-04-10 00:36:47.074072 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-04-10 00:36:47.155399 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:36:47.155645 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-04-10 00:36:47.156418 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-04-10 00:36:47.230430 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:36:47.231353 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-04-10 00:36:47.232047 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-04-10 00:36:47.357258 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:36:47.358133 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-04-10 00:36:47.359319 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-04-10 00:36:47.362233 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:36:47.498463 | orchestrator | 2025-04-10 00:36:47.498571 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-04-10 00:36:47.498588 | orchestrator | Thursday 10 April 2025 00:36:47 +0000 (0:00:00.629) 0:06:34.789 ******** 2025-04-10 00:36:47.498620 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:36:47.571073 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:36:47.646736 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:36:47.709535 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:36:47.775465 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:36:47.912511 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:36:47.912963 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:36:47.913459 | orchestrator | 
2025-04-10 00:36:47.913962 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-04-10 00:36:47.914874 | orchestrator | Thursday 10 April 2025 00:36:47 +0000 (0:00:00.555) 0:06:35.345 ******** 2025-04-10 00:36:48.056918 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:36:48.129584 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:36:48.195932 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:36:48.266760 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:36:48.330810 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:36:48.436275 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:36:48.437625 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:36:48.441237 | orchestrator | 2025-04-10 00:36:48.580251 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-04-10 00:36:48.580399 | orchestrator | Thursday 10 April 2025 00:36:48 +0000 (0:00:00.523) 0:06:35.868 ******** 2025-04-10 00:36:48.580436 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:36:48.645395 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:36:48.710470 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:36:48.809074 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:36:48.890710 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:36:49.030464 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:36:49.030671 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:36:49.030700 | orchestrator | 2025-04-10 00:36:49.034232 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-04-10 00:36:54.894641 | orchestrator | Thursday 10 April 2025 00:36:49 +0000 (0:00:00.593) 0:06:36.462 ******** 2025-04-10 00:36:54.894790 | orchestrator | ok: [testbed-manager] 2025-04-10 00:36:54.894897 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:36:54.896074 | 
orchestrator | changed: [testbed-node-4] 2025-04-10 00:36:54.897097 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:36:54.898297 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:36:54.899058 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:36:54.899740 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:36:54.900366 | orchestrator | 2025-04-10 00:36:54.900970 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-04-10 00:36:54.901437 | orchestrator | Thursday 10 April 2025 00:36:54 +0000 (0:00:05.865) 0:06:42.328 ******** 2025-04-10 00:36:55.749036 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:36:55.749217 | orchestrator | 2025-04-10 00:36:55.753370 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-04-10 00:36:56.609695 | orchestrator | Thursday 10 April 2025 00:36:55 +0000 (0:00:00.852) 0:06:43.180 ******** 2025-04-10 00:36:56.609830 | orchestrator | ok: [testbed-manager] 2025-04-10 00:36:56.610194 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:36:56.611353 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:36:56.612179 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:36:56.612683 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:36:56.614251 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:36:56.614601 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:36:56.615447 | orchestrator | 2025-04-10 00:36:56.615919 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-04-10 00:36:56.617003 | orchestrator | Thursday 10 April 2025 00:36:56 +0000 (0:00:00.862) 0:06:44.043 ******** 2025-04-10 00:36:57.706769 | orchestrator | ok: [testbed-manager] 
2025-04-10 00:36:57.707565 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:36:57.707618 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:36:57.708593 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:36:57.709532 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:36:57.710101 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:36:57.710355 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:36:57.711203 | orchestrator | 2025-04-10 00:36:57.712206 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-04-10 00:36:57.713251 | orchestrator | Thursday 10 April 2025 00:36:57 +0000 (0:00:01.092) 0:06:45.136 ******** 2025-04-10 00:36:59.176260 | orchestrator | ok: [testbed-manager] 2025-04-10 00:36:59.180755 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:36:59.182329 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:36:59.182364 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:36:59.182386 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:36:59.182950 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:36:59.184406 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:36:59.185483 | orchestrator | 2025-04-10 00:36:59.186068 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-04-10 00:36:59.187079 | orchestrator | Thursday 10 April 2025 00:36:59 +0000 (0:00:01.471) 0:06:46.607 ******** 2025-04-10 00:36:59.317173 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:37:00.553307 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:37:00.553871 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:37:00.553913 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:37:00.559699 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:37:00.561082 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:37:00.563120 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:37:00.563605 | orchestrator | 
2025-04-10 00:37:00.565606 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-04-10 00:37:00.567782 | orchestrator | Thursday 10 April 2025 00:37:00 +0000 (0:00:01.379) 0:06:47.986 ******** 2025-04-10 00:37:01.918948 | orchestrator | ok: [testbed-manager] 2025-04-10 00:37:01.919565 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:37:01.920432 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:37:01.923808 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:37:01.924469 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:37:01.924513 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:37:01.926431 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:37:01.926644 | orchestrator | 2025-04-10 00:37:01.927675 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-04-10 00:37:01.928518 | orchestrator | Thursday 10 April 2025 00:37:01 +0000 (0:00:01.363) 0:06:49.350 ******** 2025-04-10 00:37:03.383817 | orchestrator | changed: [testbed-manager] 2025-04-10 00:37:03.386942 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:37:03.389550 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:37:03.389666 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:37:03.391190 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:37:03.392123 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:37:03.393288 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:37:03.393544 | orchestrator | 2025-04-10 00:37:03.394290 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-04-10 00:37:03.395050 | orchestrator | Thursday 10 April 2025 00:37:03 +0000 (0:00:01.463) 0:06:50.813 ******** 2025-04-10 00:37:04.545904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:37:04.546556 | orchestrator | 2025-04-10 00:37:04.546621 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-04-10 00:37:04.547527 | orchestrator | Thursday 10 April 2025 00:37:04 +0000 (0:00:01.164) 0:06:51.977 ******** 2025-04-10 00:37:05.957781 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:37:05.958899 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:37:05.960040 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:37:05.961140 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:37:05.962238 | orchestrator | ok: [testbed-manager] 2025-04-10 00:37:05.963193 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:37:05.963869 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:37:05.964694 | orchestrator | 2025-04-10 00:37:05.965631 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-04-10 00:37:05.966168 | orchestrator | Thursday 10 April 2025 00:37:05 +0000 (0:00:01.412) 0:06:53.389 ******** 2025-04-10 00:37:07.118089 | orchestrator | ok: [testbed-manager] 2025-04-10 00:37:07.119002 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:37:07.119278 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:37:07.119303 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:37:07.119719 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:37:07.122861 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:37:07.125599 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:37:07.126274 | orchestrator | 2025-04-10 00:37:07.126310 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-04-10 00:37:07.127302 | orchestrator | Thursday 10 April 2025 00:37:07 +0000 (0:00:01.160) 0:06:54.550 ******** 2025-04-10 00:37:08.281905 | orchestrator | ok: [testbed-manager] 2025-04-10 00:37:08.282274 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:37:08.282979 | 
orchestrator | ok: [testbed-node-4] 2025-04-10 00:37:08.283513 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:37:08.284179 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:37:08.284751 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:37:08.285231 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:37:08.285727 | orchestrator | 2025-04-10 00:37:08.286672 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-04-10 00:37:08.287684 | orchestrator | Thursday 10 April 2025 00:37:08 +0000 (0:00:01.162) 0:06:55.712 ******** 2025-04-10 00:37:09.631660 | orchestrator | ok: [testbed-manager] 2025-04-10 00:37:09.631833 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:37:09.632668 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:37:09.633629 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:37:09.633981 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:37:09.634557 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:37:09.635032 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:37:09.635905 | orchestrator | 2025-04-10 00:37:09.636898 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-04-10 00:37:09.637585 | orchestrator | Thursday 10 April 2025 00:37:09 +0000 (0:00:01.351) 0:06:57.064 ******** 2025-04-10 00:37:10.895100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:37:10.895322 | orchestrator | 2025-04-10 00:37:10.896332 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-10 00:37:10.897222 | orchestrator | Thursday 10 April 2025 00:37:10 +0000 (0:00:00.925) 0:06:57.990 ******** 2025-04-10 00:37:10.899864 | orchestrator | 2025-04-10 00:37:10.900666 | orchestrator | TASK [osism.services.docker : 
Flush handlers] ********************************** 2025-04-10 00:37:10.902436 | orchestrator | Thursday 10 April 2025 00:37:10 +0000 (0:00:00.049) 0:06:58.039 ******** 2025-04-10 00:37:10.903072 | orchestrator | 2025-04-10 00:37:10.903931 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-10 00:37:10.904665 | orchestrator | Thursday 10 April 2025 00:37:10 +0000 (0:00:00.052) 0:06:58.091 ******** 2025-04-10 00:37:10.905235 | orchestrator | 2025-04-10 00:37:10.905853 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-10 00:37:10.906563 | orchestrator | Thursday 10 April 2025 00:37:10 +0000 (0:00:00.046) 0:06:58.138 ******** 2025-04-10 00:37:10.907906 | orchestrator | 2025-04-10 00:37:10.908125 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-10 00:37:10.908827 | orchestrator | Thursday 10 April 2025 00:37:10 +0000 (0:00:00.039) 0:06:58.177 ******** 2025-04-10 00:37:10.909436 | orchestrator | 2025-04-10 00:37:10.910695 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-10 00:37:10.911163 | orchestrator | Thursday 10 April 2025 00:37:10 +0000 (0:00:00.045) 0:06:58.223 ******** 2025-04-10 00:37:10.911685 | orchestrator | 2025-04-10 00:37:10.912193 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-10 00:37:10.912736 | orchestrator | Thursday 10 April 2025 00:37:10 +0000 (0:00:00.059) 0:06:58.282 ******** 2025-04-10 00:37:10.913025 | orchestrator | 2025-04-10 00:37:10.913809 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-10 00:37:10.914314 | orchestrator | Thursday 10 April 2025 00:37:10 +0000 (0:00:00.042) 0:06:58.325 ******** 2025-04-10 00:37:12.043096 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:37:12.044584 | orchestrator | ok: 
[testbed-node-2] 2025-04-10 00:37:12.045688 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:37:12.046663 | orchestrator | 2025-04-10 00:37:12.049775 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-04-10 00:37:12.050181 | orchestrator | Thursday 10 April 2025 00:37:12 +0000 (0:00:01.148) 0:06:59.473 ******** 2025-04-10 00:37:13.646461 | orchestrator | changed: [testbed-manager] 2025-04-10 00:37:13.646642 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:37:13.647362 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:37:13.647433 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:37:13.647665 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:37:13.648629 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:37:13.648827 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:37:13.649094 | orchestrator | 2025-04-10 00:37:13.649279 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-04-10 00:37:13.652394 | orchestrator | Thursday 10 April 2025 00:37:13 +0000 (0:00:01.604) 0:07:01.077 ******** 2025-04-10 00:37:14.792376 | orchestrator | changed: [testbed-manager] 2025-04-10 00:37:14.793144 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:37:14.793838 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:37:14.801091 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:37:14.801263 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:37:14.801841 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:37:14.802011 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:37:14.802825 | orchestrator | 2025-04-10 00:37:14.803325 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-04-10 00:37:14.804023 | orchestrator | Thursday 10 April 2025 00:37:14 +0000 (0:00:01.145) 0:07:02.222 ******** 2025-04-10 00:37:14.932362 | orchestrator | skipping: [testbed-manager] 
2025-04-10 00:37:16.794723 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:37:16.794908 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:37:16.796057 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:37:16.797272 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:37:16.798497 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:37:16.798786 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:37:16.799595 | orchestrator | 
2025-04-10 00:37:16.800286 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-04-10 00:37:16.800922 | orchestrator | Thursday 10 April 2025 00:37:16 +0000 (0:00:02.002) 0:07:04.225 ********
2025-04-10 00:37:16.895216 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:37:16.898519 | orchestrator | 
2025-04-10 00:37:16.898563 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-04-10 00:37:17.921851 | orchestrator | Thursday 10 April 2025 00:37:16 +0000 (0:00:00.101) 0:07:04.326 ********
2025-04-10 00:37:17.922103 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:17.922271 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:37:17.922322 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:37:17.922385 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:37:17.923079 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:37:17.923710 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:37:17.924161 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:37:17.924188 | orchestrator | 
2025-04-10 00:37:17.924713 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-04-10 00:37:17.924844 | orchestrator | Thursday 10 April 2025 00:37:17 +0000 (0:00:01.024) 0:07:05.351 ********
2025-04-10 00:37:18.070260 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:37:18.137607 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:37:18.224108 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:37:18.490885 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:37:18.560140 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:37:18.683203 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:37:18.684239 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:37:18.684664 | orchestrator | 
2025-04-10 00:37:18.688419 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-04-10 00:37:19.598318 | orchestrator | Thursday 10 April 2025 00:37:18 +0000 (0:00:00.763) 0:07:06.115 ********
2025-04-10 00:37:19.598459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:37:19.599268 | orchestrator | 
2025-04-10 00:37:19.600667 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-04-10 00:37:19.601382 | orchestrator | Thursday 10 April 2025 00:37:19 +0000 (0:00:00.916) 0:07:07.031 ********
2025-04-10 00:37:20.019071 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:20.447842 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:37:20.448964 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:37:20.450775 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:37:20.451782 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:37:20.452409 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:37:20.453105 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:37:20.454207 | orchestrator | 
2025-04-10 00:37:20.454569 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-04-10 00:37:20.455141 | orchestrator | Thursday 10 April 2025 00:37:20 +0000 (0:00:00.850) 0:07:07.882 ********
2025-04-10 00:37:23.122371 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-04-10 00:37:23.122553 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-04-10 00:37:23.123208 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-04-10 00:37:23.124303 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-04-10 00:37:23.125527 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-04-10 00:37:23.127194 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-04-10 00:37:23.127258 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-04-10 00:37:23.127277 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-04-10 00:37:23.127297 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-04-10 00:37:23.128045 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-04-10 00:37:23.128492 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-04-10 00:37:23.129357 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-04-10 00:37:23.130357 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-04-10 00:37:23.130807 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-04-10 00:37:23.131868 | orchestrator | 
2025-04-10 00:37:23.131966 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-04-10 00:37:23.132847 | orchestrator | Thursday 10 April 2025 00:37:23 +0000 (0:00:02.672) 0:07:10.554 ********
2025-04-10 00:37:23.286829 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:37:23.355274 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:37:23.442363 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:37:23.511657 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:37:23.582415 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:37:23.686554 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:37:23.687482 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:37:23.687876 | orchestrator | 
2025-04-10 00:37:23.689053 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-04-10 00:37:23.689668 | orchestrator | Thursday 10 April 2025 00:37:23 +0000 (0:00:00.566) 0:07:11.120 ********
2025-04-10 00:37:24.551370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:37:24.551655 | orchestrator | 
2025-04-10 00:37:24.552367 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-04-10 00:37:24.553063 | orchestrator | Thursday 10 April 2025 00:37:24 +0000 (0:00:00.862) 0:07:11.983 ********
2025-04-10 00:37:25.011411 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:25.632476 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:37:25.634448 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:37:25.635450 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:37:25.635501 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:37:25.637502 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:37:25.638709 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:37:25.639537 | orchestrator | 
2025-04-10 00:37:25.640179 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-04-10 00:37:25.641371 | orchestrator | Thursday 10 April 2025 00:37:25 +0000 (0:00:01.080) 0:07:13.064 ********
2025-04-10 00:37:26.078450 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:26.465430 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:37:26.466514 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:37:26.466567 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:37:26.469247 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:37:26.469289 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:37:26.470298 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:37:26.470336 | orchestrator | 
2025-04-10 00:37:26.470362 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-04-10 00:37:26.470385 | orchestrator | Thursday 10 April 2025 00:37:26 +0000 (0:00:00.832) 0:07:13.896 ********
2025-04-10 00:37:26.616259 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:37:26.681770 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:37:26.758400 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:37:26.824742 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:37:26.893304 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:37:26.995519 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:37:26.995724 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:37:26.996781 | orchestrator | 
2025-04-10 00:37:26.996905 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-04-10 00:37:26.997158 | orchestrator | Thursday 10 April 2025 00:37:26 +0000 (0:00:00.532) 0:07:14.428 ********
2025-04-10 00:37:28.409640 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:28.410178 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:37:28.411320 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:37:28.412661 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:37:28.414376 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:37:28.414942 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:37:28.415772 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:37:28.416495 | orchestrator | 
2025-04-10 00:37:28.417019 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-04-10 00:37:28.417843 | orchestrator | Thursday 10 April 2025 00:37:28 +0000 (0:00:01.414) 0:07:15.843 ********
2025-04-10 00:37:28.540118 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:37:28.619381 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:37:28.680808 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:37:28.750487 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:37:28.828312 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:37:28.968501 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:37:28.968729 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:37:28.969422 | orchestrator | 
2025-04-10 00:37:28.970264 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-04-10 00:37:28.971267 | orchestrator | Thursday 10 April 2025 00:37:28 +0000 (0:00:00.556) 0:07:16.400 ********
2025-04-10 00:37:31.009036 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:31.009534 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:37:31.011098 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:37:31.013437 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:37:31.015482 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:37:31.015524 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:37:31.016345 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:37:31.017070 | orchestrator | 
2025-04-10 00:37:31.018161 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-04-10 00:37:31.019000 | orchestrator | Thursday 10 April 2025 00:37:30 +0000 (0:00:02.039) 0:07:18.439 ********
2025-04-10 00:37:32.379809 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:37:32.380055 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:37:32.380362 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:32.381182 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:37:32.382479 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:37:32.383028 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:37:32.383622 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:37:32.384394 | orchestrator | 
2025-04-10 00:37:32.384932 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-04-10 00:37:32.385304 | orchestrator | Thursday 10 April 2025 00:37:32 +0000 (0:00:01.374) 0:07:19.813 ********
2025-04-10 00:37:34.181829 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:34.182128 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:37:34.182685 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:37:34.183424 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:37:34.185729 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:37:34.186378 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:37:34.187433 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:37:34.187935 | orchestrator | 
2025-04-10 00:37:34.188906 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-04-10 00:37:34.189531 | orchestrator | Thursday 10 April 2025 00:37:34 +0000 (0:00:01.796) 0:07:21.610 ********
2025-04-10 00:37:35.974899 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:35.975186 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:37:35.976447 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:37:35.978170 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:37:35.979139 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:37:35.980831 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:37:35.981914 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:37:35.984534 | orchestrator | 
2025-04-10 00:37:35.986609 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-04-10 00:37:35.986761 | orchestrator | Thursday 10 April 2025 00:37:35 +0000 (0:00:01.793) 0:07:23.403 ********
2025-04-10 00:37:36.758629 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:36.842967 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:37:37.291668 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:37:37.292550 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:37:37.292593 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:37:37.297143 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:37:37.302069 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:37:37.303313 | orchestrator | 
2025-04-10 00:37:37.303345 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-04-10 00:37:37.303749 | orchestrator | Thursday 10 April 2025 00:37:37 +0000 (0:00:01.319) 0:07:24.723 ********
2025-04-10 00:37:37.448688 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:37:37.527449 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:37:37.600927 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:37:37.670892 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:37:37.742415 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:37:38.189732 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:37:38.190505 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:37:38.190548 | orchestrator | 
2025-04-10 00:37:38.191391 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-04-10 00:37:38.192101 | orchestrator | Thursday 10 April 2025 00:37:38 +0000 (0:00:00.898) 0:07:25.622 ********
2025-04-10 00:37:38.340517 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:37:38.420655 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:37:38.521660 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:37:38.589128 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:37:38.661923 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:37:38.780707 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:37:38.781265 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:37:38.781671 | orchestrator | 
2025-04-10 00:37:38.782303 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-04-10 00:37:38.783302 | orchestrator | Thursday 10 April 2025 00:37:38 +0000 (0:00:00.592) 0:07:26.214 ********
2025-04-10 00:37:38.925812 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:38.998254 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:37:39.064200 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:37:39.152526 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:37:39.228485 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:37:39.361490 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:37:39.362651 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:37:39.363110 | orchestrator | 
2025-04-10 00:37:39.367240 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-04-10 00:37:39.712490 | orchestrator | Thursday 10 April 2025 00:37:39 +0000 (0:00:00.577) 0:07:26.792 ********
2025-04-10 00:37:39.712614 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:39.777857 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:37:39.845079 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:37:39.920480 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:37:39.989402 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:37:40.098808 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:37:40.099278 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:37:40.103484 | orchestrator | 
2025-04-10 00:37:40.103725 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-04-10 00:37:40.103756 | orchestrator | Thursday 10 April 2025 00:37:40 +0000 (0:00:00.738) 0:07:27.530 ********
2025-04-10 00:37:40.239895 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:40.307357 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:37:40.383758 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:37:40.449136 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:37:40.529469 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:37:40.663972 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:37:40.664274 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:37:40.665450 | orchestrator | 
2025-04-10 00:37:40.666472 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-04-10 00:37:40.667485 | orchestrator | Thursday 10 April 2025 00:37:40 +0000 (0:00:00.567) 0:07:28.097 ********
2025-04-10 00:37:46.356439 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:46.357174 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:37:46.357915 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:37:46.359372 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:37:46.360066 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:37:46.361040 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:37:46.362185 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:37:46.362274 | orchestrator | 
2025-04-10 00:37:46.363236 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-04-10 00:37:46.364055 | orchestrator | Thursday 10 April 2025 00:37:46 +0000 (0:00:05.692) 0:07:33.789 ********
2025-04-10 00:37:46.506900 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:37:46.575100 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:37:46.652573 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:37:46.740865 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:37:46.840646 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:37:46.960857 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:37:46.961909 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:37:46.961967 | orchestrator | 
2025-04-10 00:37:46.962703 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-04-10 00:37:46.963081 | orchestrator | Thursday 10 April 2025 00:37:46 +0000 (0:00:00.603) 0:07:34.393 ********
2025-04-10 00:37:48.075777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:37:48.075951 | orchestrator | 
2025-04-10 00:37:48.079328 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-04-10 00:37:49.851083 | orchestrator | Thursday 10 April 2025 00:37:48 +0000 (0:00:01.112) 0:07:35.506 ********
2025-04-10 00:37:49.851214 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:49.851335 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:37:49.853031 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:37:49.854096 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:37:49.854166 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:37:49.856867 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:37:49.857442 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:37:49.857882 | orchestrator | 
2025-04-10 00:37:49.858393 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-04-10 00:37:49.859062 | orchestrator | Thursday 10 April 2025 00:37:49 +0000 (0:00:01.775) 0:07:37.282 ********
2025-04-10 00:37:51.009449 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:51.009921 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:37:51.011162 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:37:51.011869 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:37:51.013055 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:37:51.013535 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:37:51.014400 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:37:51.015273 | orchestrator | 
2025-04-10 00:37:51.015793 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-04-10 00:37:51.017182 | orchestrator | Thursday 10 April 2025 00:37:51 +0000 (0:00:01.160) 0:07:38.443 ********
2025-04-10 00:37:51.914443 | orchestrator | ok: [testbed-manager]
2025-04-10 00:37:51.914761 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:37:51.916296 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:37:51.917444 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:37:51.918730 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:37:51.919937 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:37:51.920750 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:37:51.921355 | orchestrator | 
2025-04-10 00:37:51.922501 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-04-10 00:37:51.924235 | orchestrator | Thursday 10 April 2025 00:37:51 +0000 (0:00:00.902) 0:07:39.345 ********
2025-04-10 00:37:53.877598 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-10 00:37:53.877830 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-10 00:37:53.882337 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-10 00:37:53.882628 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-10 00:37:53.882659 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-10 00:37:53.882678 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-10 00:37:53.883825 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-10 00:37:53.884020 | orchestrator | 
2025-04-10 00:37:53.884499 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-04-10 00:37:53.885178 | orchestrator | Thursday 10 April 2025 00:37:53 +0000 (0:00:01.963) 0:07:41.309 ********
2025-04-10 00:37:54.722671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:37:54.722928 | orchestrator | 
2025-04-10 00:37:54.723404 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-04-10 00:37:54.723444 | orchestrator | Thursday 10 April 2025 00:37:54 +0000 (0:00:00.845) 0:07:42.155 ********
2025-04-10 00:38:03.718411 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:38:03.718652 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:38:03.719532 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:38:03.719972 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:38:03.720873 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:38:03.721464 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:38:03.722163 | orchestrator | changed: [testbed-manager]
2025-04-10 00:38:03.724431 | orchestrator | 
2025-04-10 00:38:03.725152 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-04-10 00:38:03.725702 | orchestrator | Thursday 10 April 2025 00:38:03 +0000 (0:00:08.995) 0:07:51.150 ********
2025-04-10 00:38:05.542600 | orchestrator | ok: [testbed-manager]
2025-04-10 00:38:05.543114 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:38:05.543460 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:38:05.544267 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:38:05.545947 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:38:05.546355 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:38:05.546388 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:38:05.547222 | orchestrator | 
2025-04-10 00:38:05.548316 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-04-10 00:38:05.549247 | orchestrator | Thursday 10 April 2025 00:38:05 +0000 (0:00:01.823) 0:07:52.973 ********
2025-04-10 00:38:06.874635 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:38:06.874926 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:38:06.877325 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:38:06.878328 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:38:06.878826 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:38:06.879950 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:38:06.881828 | orchestrator | 
2025-04-10 00:38:06.883348 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-04-10 00:38:06.884226 | orchestrator | Thursday 10 April 2025 00:38:06 +0000 (0:00:01.330) 0:07:54.304 ********
2025-04-10 00:38:08.347081 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:38:08.347372 | orchestrator | changed: [testbed-manager]
2025-04-10 00:38:08.349085 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:38:08.349152 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:38:08.351269 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:38:08.351777 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:38:08.352947 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:38:08.353848 | orchestrator | 
2025-04-10 00:38:08.355094 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-04-10 00:38:08.355294 | orchestrator | 
2025-04-10 00:38:08.356860 | orchestrator | TASK [Include hardening role] **************************************************
2025-04-10 00:38:08.495836 | orchestrator | Thursday 10 April 2025 00:38:08 +0000 (0:00:01.473) 0:07:55.778 ********
2025-04-10 00:38:08.495967 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:38:08.563101 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:38:08.650614 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:38:08.714145 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:38:08.776417 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:38:08.913054 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:38:08.913264 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:38:08.914312 | orchestrator | 
2025-04-10 00:38:08.915028 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-04-10 00:38:08.915651 | orchestrator | 
2025-04-10 00:38:08.916239 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-04-10 00:38:08.919106 | orchestrator | Thursday 10 April 2025 00:38:08 +0000 (0:00:00.569) 0:07:56.348 ********
2025-04-10 00:38:10.310194 | orchestrator | changed: [testbed-manager]
2025-04-10 00:38:10.310393 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:38:10.311114 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:38:10.312432 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:38:10.313119 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:38:10.313912 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:38:10.314875 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:38:10.315377 | orchestrator | 
2025-04-10 00:38:10.316357 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-04-10 00:38:10.316696 | orchestrator | Thursday 10 April 2025 00:38:10 +0000 (0:00:01.392) 0:07:57.740 ********
2025-04-10 00:38:11.748490 | orchestrator | ok: [testbed-manager]
2025-04-10 00:38:11.748679 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:38:11.752824 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:38:11.754363 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:38:11.754795 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:38:11.757592 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:38:11.757799 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:38:11.758936 | orchestrator | 
2025-04-10 00:38:11.759764 | orchestrator | TASK [Include auditd role] *****************************************************
2025-04-10 00:38:11.760820 | orchestrator | Thursday 10 April 2025 00:38:11 +0000 (0:00:01.437) 0:07:59.178 ********
2025-04-10 00:38:11.911462 | orchestrator | skipping: [testbed-manager]
2025-04-10 00:38:12.202125 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:38:12.267392 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:38:12.372051 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:38:12.443826 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:38:12.846293 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:38:12.847059 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:38:12.847100 | orchestrator | 
2025-04-10 00:38:12.847499 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-04-10 00:38:12.853444 | orchestrator | Thursday 10 April 2025 00:38:12 +0000 (0:00:01.099) 0:08:00.277 ********
2025-04-10 00:38:14.095471 | orchestrator | changed: [testbed-manager]
2025-04-10 00:38:14.096063 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:38:14.097051 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:38:14.098343 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:38:14.099817 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:38:14.100713 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:38:14.101895 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:38:14.102559 | orchestrator | 
2025-04-10 00:38:14.103766 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-04-10 00:38:14.104088 | orchestrator | 
2025-04-10 00:38:14.104826 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-04-10 00:38:14.106171 | orchestrator | Thursday 10 April 2025 00:38:14 +0000 (0:00:01.251) 0:08:01.528 ********
2025-04-10 00:38:15.125424 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:38:15.125689 | orchestrator | 
2025-04-10 00:38:15.128715 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-04-10 00:38:15.637801 | orchestrator | Thursday 10 April 2025 00:38:15 +0000 (0:00:01.028) 0:08:02.557 ********
2025-04-10 00:38:15.637935 | orchestrator | ok: [testbed-manager]
2025-04-10 00:38:16.065731 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:38:16.066302 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:38:16.067297 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:38:16.067339 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:38:16.067840 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:38:16.068370 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:38:16.071733 | orchestrator | 
2025-04-10 00:38:16.072156 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-04-10 00:38:17.228563 | orchestrator | Thursday 10 April 2025 00:38:16 +0000 (0:00:00.939) 0:08:03.496 ********
2025-04-10 00:38:17.228732 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:38:17.228817 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:38:17.228842 | orchestrator | changed: [testbed-manager]
2025-04-10 00:38:17.230226 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:38:17.230799 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:38:17.232265 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:38:17.232771 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:38:17.236512 | orchestrator | 
2025-04-10 00:38:18.310128 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-04-10 00:38:18.310247 | orchestrator | Thursday 10 April 2025 00:38:17 +0000 (0:00:01.164) 0:08:04.660 ********
2025-04-10 00:38:18.310281 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:38:18.313490 | orchestrator | 
2025-04-10 00:38:18.313857 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-04-10 00:38:18.314765 | orchestrator | Thursday 10 April 2025 00:38:18 +0000 (0:00:01.081) 0:08:05.742 ********
2025-04-10 00:38:19.166465 | orchestrator | ok: [testbed-manager]
2025-04-10 00:38:19.167213 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:38:19.167677 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:38:19.169033 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:38:19.169197 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:38:19.170127 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:38:19.170753 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:38:19.170785 | orchestrator | 
2025-04-10 00:38:19.172395 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-04-10 00:38:19.172597 | orchestrator | Thursday 10 April 2025 00:38:19 +0000 (0:00:00.858) 0:08:06.600 ********
2025-04-10 00:38:19.613693 | orchestrator | changed: [testbed-manager]
2025-04-10 00:38:20.345333 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:38:20.345762 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:38:20.345804 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:38:20.346243 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:38:20.347133 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:38:20.348309 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:38:20.348926 | orchestrator | 
2025-04-10 00:38:20.350120 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:38:20.350243 | orchestrator | 2025-04-10 00:38:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-10 00:38:20.350456 | orchestrator | 2025-04-10 00:38:20 | INFO  | Please wait and do not abort execution.
2025-04-10 00:38:20.351367 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-04-10 00:38:20.352130 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-10 00:38:20.352666 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-10 00:38:20.353276 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-10 00:38:20.354723 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-04-10 00:38:20.355154 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-10 00:38:20.355590 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-10 00:38:20.355856 | orchestrator | 
2025-04-10 00:38:20.356166 | orchestrator | Thursday 10 April 2025 00:38:20 +0000 (0:00:01.177) 0:08:07.778 ********
2025-04-10 00:38:20.356587 | orchestrator | ===============================================================================
2025-04-10 00:38:20.357121 | orchestrator | osism.commons.packages : Install required packages --------------------- 82.27s
2025-04-10 00:38:20.357374 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.37s
2025-04-10 00:38:20.357737 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.88s
2025-04-10 00:38:20.358193 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.29s
2025-04-10 00:38:20.358596 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.18s
2025-04-10 00:38:20.359548 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.85s
2025-04-10 00:38:20.359936 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.41s
2025-04-10 00:38:20.361032 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.06s
2025-04-10 00:38:20.361070 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.67s
2025-04-10 00:38:20.361637 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.00s
2025-04-10 00:38:20.362069 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.66s
2025-04-10 00:38:20.362350 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.66s
2025-04-10 00:38:20.362971 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.56s
2025-04-10 00:38:20.363661 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.41s
2025-04-10 00:38:20.364142 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.24s
2025-04-10 00:38:20.364556 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------- 6.82s
2025-04-10 00:38:20.364882 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.12s
2025-04-10 00:38:20.365379 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 5.87s
2025-04-10 00:38:20.365743 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.80s
2025-04-10 00:38:20.366133 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.80s
2025-04-10 00:38:21.139590 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-04-10 00:38:23.322882 | orchestrator | + osism apply network
2025-04-10 00:38:23.323070 | orchestrator | 2025-04-10 00:38:23 | INFO  | Task 0e403ce7-e248-4c8a-91d6-49fba2cd9805 (network) was prepared for execution.
2025-04-10 00:38:23.323166 | orchestrator | 2025-04-10 00:38:23 | INFO  | It takes a moment until task 0e403ce7-e248-4c8a-91d6-49fba2cd9805 (network) has been started and output is visible here.
2025-04-10 00:38:26.764098 | orchestrator | 
2025-04-10 00:38:26.764382 | orchestrator | PLAY [Apply role network] ******************************************************
2025-04-10 00:38:26.764598 | orchestrator | 
2025-04-10 00:38:26.765437 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-04-10 00:38:26.767398 | orchestrator | Thursday 10 April 2025 00:38:26 +0000 (0:00:00.231) 0:00:00.231 ********
2025-04-10 00:38:26.938513 | orchestrator | ok: [testbed-manager]
2025-04-10 00:38:27.010711 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:38:27.088261 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:38:27.166288 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:38:27.243336 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:38:27.493876 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:38:27.494803 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:38:27.494886 | orchestrator | 
2025-04-10 00:38:27.496017 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-04-10 00:38:27.496177 | orchestrator | Thursday 10 April 2025 00:38:27 +0000 (0:00:00.731) 0:00:00.962 ********
2025-04-10 00:38:28.742515 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 00:38:28.743024 | orchestrator | 2025-04-10 00:38:28.744270 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-04-10 00:38:28.747037 | orchestrator | Thursday 10 April 2025 00:38:28 +0000 (0:00:01.247) 0:00:02.209 ******** 2025-04-10 00:38:30.662614 | orchestrator | ok: [testbed-manager] 2025-04-10 00:38:30.662920 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:38:30.663669 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:38:30.664568 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:38:30.666306 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:38:30.667355 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:38:30.667565 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:38:30.668276 | orchestrator | 2025-04-10 00:38:30.668836 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-04-10 00:38:30.669263 | orchestrator | Thursday 10 April 2025 00:38:30 +0000 (0:00:01.923) 0:00:04.133 ******** 2025-04-10 00:38:32.452749 | orchestrator | ok: [testbed-manager] 2025-04-10 00:38:32.453352 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:38:32.455101 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:38:32.456736 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:38:32.457650 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:38:32.458495 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:38:32.459507 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:38:32.460616 | orchestrator | 2025-04-10 00:38:32.461859 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-04-10 00:38:32.462363 | orchestrator | Thursday 10 April 2025 00:38:32 +0000 
(0:00:01.783) 0:00:05.916 ******** 2025-04-10 00:38:33.629561 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-04-10 00:38:33.630671 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-04-10 00:38:33.630788 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-04-10 00:38:33.631564 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-04-10 00:38:33.634444 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-04-10 00:38:33.635109 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-04-10 00:38:33.637144 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-04-10 00:38:33.637326 | orchestrator | 2025-04-10 00:38:33.637351 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-04-10 00:38:33.637371 | orchestrator | Thursday 10 April 2025 00:38:33 +0000 (0:00:01.179) 0:00:07.096 ******** 2025-04-10 00:38:35.418526 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-10 00:38:35.418756 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-10 00:38:35.419207 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-10 00:38:35.420818 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-10 00:38:35.425107 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-10 00:38:35.428094 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-10 00:38:35.428169 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-10 00:38:35.428726 | orchestrator | 2025-04-10 00:38:35.429540 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-04-10 00:38:35.430440 | orchestrator | Thursday 10 April 2025 00:38:35 +0000 (0:00:01.791) 0:00:08.887 ******** 2025-04-10 00:38:37.155475 | orchestrator | changed: [testbed-manager] 2025-04-10 00:38:37.159283 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:38:37.160611 | orchestrator | changed: [testbed-node-1] 2025-04-10 
00:38:37.160641 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:38:37.160662 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:38:37.161875 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:38:37.162456 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:38:37.163200 | orchestrator | 2025-04-10 00:38:37.164058 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-04-10 00:38:37.164686 | orchestrator | Thursday 10 April 2025 00:38:37 +0000 (0:00:01.732) 0:00:10.620 ******** 2025-04-10 00:38:37.652640 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-10 00:38:38.304046 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-10 00:38:38.305186 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-10 00:38:38.306446 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-10 00:38:38.306959 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-10 00:38:38.308193 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-10 00:38:38.309407 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-10 00:38:38.310290 | orchestrator | 2025-04-10 00:38:38.313360 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-04-10 00:38:38.314157 | orchestrator | Thursday 10 April 2025 00:38:38 +0000 (0:00:01.154) 0:00:11.774 ******** 2025-04-10 00:38:38.792831 | orchestrator | ok: [testbed-manager] 2025-04-10 00:38:38.877061 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:38:39.460583 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:38:39.461206 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:38:39.466101 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:38:39.466490 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:38:39.466521 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:38:39.467848 | orchestrator | 2025-04-10 00:38:39.468141 | orchestrator | TASK [osism.commons.network : Copy interfaces file] 
**************************** 2025-04-10 00:38:39.468676 | orchestrator | Thursday 10 April 2025 00:38:39 +0000 (0:00:01.153) 0:00:12.928 ******** 2025-04-10 00:38:39.623629 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:38:39.702908 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:38:39.781160 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:38:39.861347 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:38:39.935572 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:38:40.264821 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:38:40.265297 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:38:40.265682 | orchestrator | 2025-04-10 00:38:40.266419 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-04-10 00:38:40.267152 | orchestrator | Thursday 10 April 2025 00:38:40 +0000 (0:00:00.805) 0:00:13.733 ******** 2025-04-10 00:38:42.258386 | orchestrator | ok: [testbed-manager] 2025-04-10 00:38:42.258959 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:38:42.259210 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:38:42.259445 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:38:42.260491 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:38:42.260871 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:38:42.262126 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:38:42.263626 | orchestrator | 2025-04-10 00:38:42.264196 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-04-10 00:38:42.265061 | orchestrator | Thursday 10 April 2025 00:38:42 +0000 (0:00:01.989) 0:00:15.723 ******** 2025-04-10 00:38:44.126260 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-04-10 00:38:44.126883 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 
2025-04-10 00:38:44.128537 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-10 00:38:44.128603 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-10 00:38:44.129709 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-10 00:38:44.131143 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-10 00:38:44.132093 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-10 00:38:44.132123 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-10 00:38:44.132173 | orchestrator | 2025-04-10 00:38:44.132592 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-04-10 00:38:44.133025 | orchestrator | Thursday 10 April 2025 00:38:44 +0000 (0:00:01.863) 0:00:17.586 ******** 2025-04-10 00:38:45.654386 | orchestrator | ok: [testbed-manager] 2025-04-10 00:38:45.655998 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:38:45.657930 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:38:45.658311 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:38:45.660155 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:38:45.661399 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:38:45.661795 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:38:45.662693 | orchestrator | 2025-04-10 00:38:45.664926 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-04-10 00:38:47.214683 | orchestrator | Thursday 10 April 2025 00:38:45 +0000 (0:00:01.536) 0:00:19.122 
******** 2025-04-10 00:38:47.214804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 00:38:47.214881 | orchestrator | 2025-04-10 00:38:47.214897 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-04-10 00:38:47.215646 | orchestrator | Thursday 10 April 2025 00:38:47 +0000 (0:00:01.558) 0:00:20.681 ******** 2025-04-10 00:38:47.771942 | orchestrator | ok: [testbed-manager] 2025-04-10 00:38:48.191778 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:38:48.194161 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:38:48.195300 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:38:48.195339 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:38:48.196727 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:38:48.196758 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:38:48.197440 | orchestrator | 2025-04-10 00:38:48.198436 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-04-10 00:38:48.199717 | orchestrator | Thursday 10 April 2025 00:38:48 +0000 (0:00:00.979) 0:00:21.661 ******** 2025-04-10 00:38:48.360105 | orchestrator | ok: [testbed-manager] 2025-04-10 00:38:48.440804 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:38:48.712370 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:38:48.810491 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:38:48.898403 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:38:49.025612 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:38:49.026308 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:38:49.027740 | orchestrator | 2025-04-10 00:38:49.028132 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-04-10 00:38:49.029136 | orchestrator | Thursday 10 April 
2025 00:38:49 +0000 (0:00:00.832) 0:00:22.493 ******** 2025-04-10 00:38:49.384175 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-10 00:38:49.490611 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-04-10 00:38:49.490746 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-10 00:38:49.580536 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-04-10 00:38:49.580662 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-10 00:38:49.581084 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-04-10 00:38:50.104918 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-10 00:38:50.106330 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-04-10 00:38:50.108116 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-10 00:38:50.108848 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-04-10 00:38:50.108911 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-10 00:38:50.109549 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-04-10 00:38:50.110882 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-10 00:38:50.111909 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-04-10 00:38:50.113093 | orchestrator | 2025-04-10 00:38:50.114122 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-04-10 00:38:50.114386 | orchestrator | Thursday 10 April 2025 00:38:50 +0000 (0:00:01.080) 0:00:23.574 ******** 2025-04-10 00:38:50.472675 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:38:50.554349 | 
orchestrator | skipping: [testbed-node-0] 2025-04-10 00:38:50.638571 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:38:50.727123 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:38:50.830421 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:38:52.035520 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:38:52.035751 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:38:52.036817 | orchestrator | 2025-04-10 00:38:52.038292 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-04-10 00:38:52.039267 | orchestrator | Thursday 10 April 2025 00:38:52 +0000 (0:00:01.926) 0:00:25.500 ******** 2025-04-10 00:38:52.190877 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:38:52.273831 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:38:52.543775 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:38:52.627803 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:38:52.727163 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:38:52.766589 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:38:52.766908 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:38:52.767678 | orchestrator | 2025-04-10 00:38:52.768940 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:38:52.769786 | orchestrator | 2025-04-10 00:38:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-10 00:38:52.770195 | orchestrator | 2025-04-10 00:38:52 | INFO  | Please wait and do not abort execution. 
2025-04-10 00:38:52.773884 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-10 00:38:52.774895 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-10 00:38:52.776322 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-10 00:38:52.776689 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-10 00:38:52.776949 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-10 00:38:52.777151 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-10 00:38:52.777802 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-10 00:38:52.778085 | orchestrator | 2025-04-10 00:38:52.778226 | orchestrator | Thursday 10 April 2025 00:38:52 +0000 (0:00:00.736) 0:00:26.237 ******** 2025-04-10 00:38:52.778866 | orchestrator | =============================================================================== 2025-04-10 00:38:52.778963 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.99s 2025-04-10 00:38:52.779176 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.93s 2025-04-10 00:38:52.779596 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.92s 2025-04-10 00:38:52.780145 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.86s 2025-04-10 00:38:52.780458 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.79s 2025-04-10 00:38:52.780933 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.78s 2025-04-10 00:38:52.781284 | 
orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.73s 2025-04-10 00:38:52.784298 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.56s 2025-04-10 00:38:52.785283 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.54s 2025-04-10 00:38:52.785328 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.25s 2025-04-10 00:38:52.785609 | orchestrator | osism.commons.network : Create required directories --------------------- 1.18s 2025-04-10 00:38:52.785924 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.15s 2025-04-10 00:38:52.786152 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.15s 2025-04-10 00:38:52.786449 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.08s 2025-04-10 00:38:52.787113 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s 2025-04-10 00:38:52.787191 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.83s 2025-04-10 00:38:52.787399 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.81s 2025-04-10 00:38:52.787671 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.74s 2025-04-10 00:38:52.788354 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.73s 2025-04-10 00:38:53.382255 | orchestrator | + osism apply wireguard 2025-04-10 00:38:54.888568 | orchestrator | 2025-04-10 00:38:54 | INFO  | Task 68b25c64-81e1-4c29-931f-1a18f00556ce (wireguard) was prepared for execution. 
2025-04-10 00:38:54.888836 | orchestrator | 2025-04-10 00:38:54 | INFO  | It takes a moment until task 68b25c64-81e1-4c29-931f-1a18f00556ce (wireguard) has been started and output is visible here. 2025-04-10 00:38:58.194312 | orchestrator | 2025-04-10 00:38:58.199150 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-04-10 00:38:58.199836 | orchestrator | 2025-04-10 00:38:58.201045 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-04-10 00:38:58.202610 | orchestrator | Thursday 10 April 2025 00:38:58 +0000 (0:00:00.185) 0:00:00.185 ******** 2025-04-10 00:38:59.813536 | orchestrator | ok: [testbed-manager] 2025-04-10 00:38:59.813813 | orchestrator | 2025-04-10 00:38:59.814325 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-04-10 00:38:59.814989 | orchestrator | Thursday 10 April 2025 00:38:59 +0000 (0:00:01.622) 0:00:01.808 ******** 2025-04-10 00:39:06.669565 | orchestrator | changed: [testbed-manager] 2025-04-10 00:39:06.670227 | orchestrator | 2025-04-10 00:39:06.670923 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-04-10 00:39:06.672053 | orchestrator | Thursday 10 April 2025 00:39:06 +0000 (0:00:06.855) 0:00:08.664 ******** 2025-04-10 00:39:07.244767 | orchestrator | changed: [testbed-manager] 2025-04-10 00:39:07.245901 | orchestrator | 2025-04-10 00:39:07.247196 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-04-10 00:39:07.247888 | orchestrator | Thursday 10 April 2025 00:39:07 +0000 (0:00:00.574) 0:00:09.239 ******** 2025-04-10 00:39:07.672243 | orchestrator | changed: [testbed-manager] 2025-04-10 00:39:07.672882 | orchestrator | 2025-04-10 00:39:07.674233 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-04-10 00:39:07.676251 | orchestrator 
| Thursday 10 April 2025 00:39:07 +0000 (0:00:00.427) 0:00:09.666 ******** 2025-04-10 00:39:08.177221 | orchestrator | ok: [testbed-manager] 2025-04-10 00:39:08.178167 | orchestrator | 2025-04-10 00:39:08.178573 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-04-10 00:39:08.178604 | orchestrator | Thursday 10 April 2025 00:39:08 +0000 (0:00:00.507) 0:00:10.173 ******** 2025-04-10 00:39:08.790546 | orchestrator | ok: [testbed-manager] 2025-04-10 00:39:08.790734 | orchestrator | 2025-04-10 00:39:08.791032 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-04-10 00:39:08.791423 | orchestrator | Thursday 10 April 2025 00:39:08 +0000 (0:00:00.604) 0:00:10.777 ******** 2025-04-10 00:39:09.225741 | orchestrator | ok: [testbed-manager] 2025-04-10 00:39:09.226179 | orchestrator | 2025-04-10 00:39:09.226588 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-04-10 00:39:09.227747 | orchestrator | Thursday 10 April 2025 00:39:09 +0000 (0:00:00.442) 0:00:11.220 ******** 2025-04-10 00:39:10.456843 | orchestrator | changed: [testbed-manager] 2025-04-10 00:39:10.458154 | orchestrator | 2025-04-10 00:39:10.458209 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-04-10 00:39:10.459018 | orchestrator | Thursday 10 April 2025 00:39:10 +0000 (0:00:01.228) 0:00:12.449 ******** 2025-04-10 00:39:11.432317 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-10 00:39:11.432637 | orchestrator | changed: [testbed-manager] 2025-04-10 00:39:11.433339 | orchestrator | 2025-04-10 00:39:11.434627 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-04-10 00:39:11.435027 | orchestrator | Thursday 10 April 2025 00:39:11 +0000 (0:00:00.976) 0:00:13.425 ******** 2025-04-10 00:39:13.208655 | orchestrator | changed: 
[testbed-manager] 2025-04-10 00:39:13.210134 | orchestrator | 2025-04-10 00:39:13.210181 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-04-10 00:39:13.210746 | orchestrator | Thursday 10 April 2025 00:39:13 +0000 (0:00:01.777) 0:00:15.203 ******** 2025-04-10 00:39:14.131701 | orchestrator | changed: [testbed-manager] 2025-04-10 00:39:14.131818 | orchestrator | 2025-04-10 00:39:14.131835 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:39:14.132455 | orchestrator | 2025-04-10 00:39:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-10 00:39:14.134491 | orchestrator | 2025-04-10 00:39:14 | INFO  | Please wait and do not abort execution. 2025-04-10 00:39:14.134540 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:39:14.135200 | orchestrator | 2025-04-10 00:39:14.135351 | orchestrator | Thursday 10 April 2025 00:39:14 +0000 (0:00:00.924) 0:00:16.127 ******** 2025-04-10 00:39:14.136306 | orchestrator | =============================================================================== 2025-04-10 00:39:14.136622 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.86s 2025-04-10 00:39:14.137260 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.78s 2025-04-10 00:39:14.137627 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.62s 2025-04-10 00:39:14.138115 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.23s 2025-04-10 00:39:14.138859 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.98s 2025-04-10 00:39:14.139414 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s 2025-04-10 
00:39:14.140150 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.60s 2025-04-10 00:39:14.140699 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 2025-04-10 00:39:14.141099 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.51s 2025-04-10 00:39:14.141813 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s 2025-04-10 00:39:14.142339 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2025-04-10 00:39:14.767113 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-04-10 00:39:14.805536 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-04-10 00:39:14.903423 | orchestrator | Dload Upload Total Spent Left Speed 2025-04-10 00:39:14.903585 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 153 0 --:--:-- --:--:-- --:--:-- 154 2025-04-10 00:39:14.918088 | orchestrator | + osism apply --environment custom workarounds 2025-04-10 00:39:16.413036 | orchestrator | 2025-04-10 00:39:16 | INFO  | Trying to run play workarounds in environment custom 2025-04-10 00:39:16.468776 | orchestrator | 2025-04-10 00:39:16 | INFO  | Task 3ef876e0-c54c-414b-be75-3d121d4c5955 (workarounds) was prepared for execution. 2025-04-10 00:39:19.699170 | orchestrator | 2025-04-10 00:39:16 | INFO  | It takes a moment until task 3ef876e0-c54c-414b-be75-3d121d4c5955 (workarounds) has been started and output is visible here. 
2025-04-10 00:39:19.699391 | orchestrator | 2025-04-10 00:39:19.699502 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 00:39:19.703151 | orchestrator | 2025-04-10 00:39:19.703988 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-04-10 00:39:19.704942 | orchestrator | Thursday 10 April 2025 00:39:19 +0000 (0:00:00.144) 0:00:00.144 ******** 2025-04-10 00:39:19.869582 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-04-10 00:39:19.957282 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-04-10 00:39:20.047522 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-04-10 00:39:20.135695 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-04-10 00:39:20.224507 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-04-10 00:39:20.503601 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-04-10 00:39:20.503815 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-04-10 00:39:20.504290 | orchestrator | 2025-04-10 00:39:20.507429 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-04-10 00:39:20.508153 | orchestrator | 2025-04-10 00:39:20.508781 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-04-10 00:39:20.510389 | orchestrator | Thursday 10 April 2025 00:39:20 +0000 (0:00:00.804) 0:00:00.949 ******** 2025-04-10 00:39:23.329614 | orchestrator | ok: [testbed-manager] 2025-04-10 00:39:23.330505 | orchestrator | 2025-04-10 00:39:23.331413 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-04-10 00:39:23.334713 | orchestrator | 2025-04-10 00:39:23.337456 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-04-10 00:39:23.342172 | orchestrator | Thursday 10 April 2025 00:39:23 +0000 (0:00:02.823) 0:00:03.772 ******** 2025-04-10 00:39:25.159137 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:39:25.162943 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:39:25.164678 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:39:25.164696 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:39:25.164708 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:39:25.166065 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:39:25.166084 | orchestrator | 2025-04-10 00:39:25.166891 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-04-10 00:39:25.167238 | orchestrator | 2025-04-10 00:39:25.167723 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-04-10 00:39:25.168279 | orchestrator | Thursday 10 April 2025 00:39:25 +0000 (0:00:01.831) 0:00:05.603 ******** 2025-04-10 00:39:26.667809 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-10 00:39:26.668118 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-10 00:39:26.668159 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-10 00:39:26.668944 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-10 00:39:26.669861 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-10 00:39:26.671536 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-10 00:39:26.672042 | orchestrator | 2025-04-10 00:39:26.672271 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-04-10 00:39:26.672771 | orchestrator | Thursday 10 April 2025 00:39:26 +0000 (0:00:01.507) 0:00:07.111 ******** 2025-04-10 00:39:30.419122 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:39:30.419682 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:39:30.421147 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:39:30.421999 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:39:30.423927 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:39:30.424715 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:39:30.425255 | orchestrator | 2025-04-10 00:39:30.426393 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-04-10 00:39:30.427015 | orchestrator | Thursday 10 April 2025 00:39:30 +0000 (0:00:03.754) 0:00:10.865 ******** 2025-04-10 00:39:30.582940 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:39:30.664436 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:39:30.745956 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:39:31.009454 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:39:31.149456 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:39:31.149927 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:39:31.151103 | orchestrator | 2025-04-10 00:39:31.157998 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-04-10 00:39:32.885674 | orchestrator | 2025-04-10 00:39:32.885803 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-04-10 00:39:32.885842 | orchestrator | Thursday 10 April 2025 00:39:31 +0000 (0:00:00.729) 0:00:11.595 ******** 2025-04-10 00:39:32.885886 | orchestrator | changed: [testbed-manager] 2025-04-10 00:39:32.886009 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:39:32.886493 | orchestrator | changed: [testbed-node-4] 2025-04-10 
00:39:32.886895 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:39:32.888949 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:39:32.889278 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:39:32.890103 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:39:32.890817 | orchestrator | 2025-04-10 00:39:32.891773 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-04-10 00:39:32.892124 | orchestrator | Thursday 10 April 2025 00:39:32 +0000 (0:00:01.736) 0:00:13.331 ******** 2025-04-10 00:39:34.547921 | orchestrator | changed: [testbed-manager] 2025-04-10 00:39:34.551356 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:39:34.553281 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:39:34.553313 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:39:34.553328 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:39:34.553349 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:39:34.554768 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:39:34.554803 | orchestrator | 2025-04-10 00:39:34.554937 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-04-10 00:39:34.558078 | orchestrator | Thursday 10 April 2025 00:39:34 +0000 (0:00:01.658) 0:00:14.990 ******** 2025-04-10 00:39:36.078083 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:39:36.078584 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:39:36.081193 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:39:36.081886 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:39:36.081909 | orchestrator | ok: [testbed-manager] 2025-04-10 00:39:36.083026 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:39:36.083935 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:39:36.084080 | orchestrator | 2025-04-10 00:39:36.084958 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-04-10 00:39:36.085316 | orchestrator 
| Thursday 10 April 2025 00:39:36 +0000 (0:00:01.533) 0:00:16.523 ******** 2025-04-10 00:39:37.867210 | orchestrator | changed: [testbed-manager] 2025-04-10 00:39:37.870438 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:39:37.871093 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:39:37.871127 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:39:37.871181 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:39:37.871206 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:39:37.871273 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:39:37.871706 | orchestrator | 2025-04-10 00:39:37.872361 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-04-10 00:39:37.872755 | orchestrator | Thursday 10 April 2025 00:39:37 +0000 (0:00:01.789) 0:00:18.313 ******** 2025-04-10 00:39:38.027046 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:39:38.106484 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:39:38.188503 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:39:38.264298 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:39:38.527152 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:39:38.671382 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:39:38.671549 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:39:38.672115 | orchestrator | 2025-04-10 00:39:38.673162 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-04-10 00:39:38.673370 | orchestrator | 2025-04-10 00:39:38.676433 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-04-10 00:39:41.134937 | orchestrator | Thursday 10 April 2025 00:39:38 +0000 (0:00:00.806) 0:00:19.119 ******** 2025-04-10 00:39:41.135108 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:39:41.135344 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:39:41.136035 | orchestrator | ok: 
[testbed-manager] 2025-04-10 00:39:41.136740 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:39:41.138653 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:39:41.139680 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:39:41.140540 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:39:41.141268 | orchestrator | 2025-04-10 00:39:41.141697 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:39:41.143216 | orchestrator | 2025-04-10 00:39:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-10 00:39:41.143448 | orchestrator | 2025-04-10 00:39:41 | INFO  | Please wait and do not abort execution. 2025-04-10 00:39:41.143492 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:39:41.144454 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:39:41.144483 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:39:41.145151 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:39:41.146329 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:39:41.147104 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:39:41.147610 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:39:41.148636 | orchestrator | 2025-04-10 00:39:41.149396 | orchestrator | Thursday 10 April 2025 00:39:41 +0000 (0:00:02.460) 0:00:21.580 ******** 2025-04-10 00:39:41.150008 | orchestrator | =============================================================================== 2025-04-10 00:39:41.151053 | orchestrator | Run update-ca-certificates 
---------------------------------------------- 3.75s 2025-04-10 00:39:41.151923 | orchestrator | Apply netplan configuration --------------------------------------------- 2.82s 2025-04-10 00:39:41.152584 | orchestrator | Install python3-docker -------------------------------------------------- 2.46s 2025-04-10 00:39:41.153161 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s 2025-04-10 00:39:41.153638 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.79s 2025-04-10 00:39:41.154347 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.74s 2025-04-10 00:39:41.155087 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.66s 2025-04-10 00:39:41.155364 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.53s 2025-04-10 00:39:41.156081 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.51s 2025-04-10 00:39:41.156665 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.81s 2025-04-10 00:39:41.157169 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.80s 2025-04-10 00:39:41.157760 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.73s 2025-04-10 00:39:41.772076 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-04-10 00:39:43.289331 | orchestrator | 2025-04-10 00:39:43 | INFO  | Task 381d791d-4881-49bd-ae20-20cee0730c2c (reboot) was prepared for execution. 2025-04-10 00:39:46.594305 | orchestrator | 2025-04-10 00:39:43 | INFO  | It takes a moment until task 381d791d-4881-49bd-ae20-20cee0730c2c (reboot) has been started and output is visible here. 
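The CA-distribution tasks in the workarounds play above (copy `testbed.crt` to each node, then run `update-ca-certificates`; the `update-ca-trust` task is skipped on these Debian-family nodes) boil down to a small shell helper. A hedged sketch — the target directory default and the `UPDATE_CA_CMD` override are illustrative assumptions, not anything taken from the playbook:

```shell
# Sketch of the per-node CA install step: copy the certificate into the
# Debian-family trust directory, then refresh the trust store.
# UPDATE_CA_CMD is an illustrative override so the refresh command can
# be stubbed in tests; the real play simply runs update-ca-certificates.
install_custom_ca() {
    local crt="$1"
    local ca_dir="${2:-/usr/local/share/ca-certificates}"
    cp "$crt" "$ca_dir/$(basename "$crt")"
    "${UPDATE_CA_CMD:-update-ca-certificates}"
}
```

On RedHat-family hosts the equivalent refresh is `update-ca-trust`, which is why the play carries both tasks and skips one of them per distribution family.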
2025-04-10 00:39:46.595265 | orchestrator | 2025-04-10 00:39:46.597226 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-10 00:39:46.597275 | orchestrator | 2025-04-10 00:39:46.697369 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-10 00:39:46.697501 | orchestrator | Thursday 10 April 2025 00:39:46 +0000 (0:00:00.157) 0:00:00.157 ******** 2025-04-10 00:39:46.697538 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:39:46.698608 | orchestrator | 2025-04-10 00:39:46.698929 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-10 00:39:46.699524 | orchestrator | Thursday 10 April 2025 00:39:46 +0000 (0:00:00.104) 0:00:00.262 ******** 2025-04-10 00:39:47.667076 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:39:47.667303 | orchestrator | 2025-04-10 00:39:47.667339 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-10 00:39:47.667673 | orchestrator | Thursday 10 April 2025 00:39:47 +0000 (0:00:00.967) 0:00:01.230 ******** 2025-04-10 00:39:47.793880 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:39:47.794347 | orchestrator | 2025-04-10 00:39:47.795258 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-10 00:39:47.795910 | orchestrator | 2025-04-10 00:39:47.796475 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-10 00:39:47.797519 | orchestrator | Thursday 10 April 2025 00:39:47 +0000 (0:00:00.130) 0:00:01.360 ******** 2025-04-10 00:39:47.891424 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:39:47.892029 | orchestrator | 2025-04-10 00:39:47.892844 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-10 00:39:47.894090 | orchestrator | Thursday 10 April 
2025 00:39:47 +0000 (0:00:00.097) 0:00:01.457 ******** 2025-04-10 00:39:48.567902 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:39:48.568417 | orchestrator | 2025-04-10 00:39:48.568463 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-10 00:39:48.569265 | orchestrator | Thursday 10 April 2025 00:39:48 +0000 (0:00:00.677) 0:00:02.134 ******** 2025-04-10 00:39:48.686604 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:39:48.688034 | orchestrator | 2025-04-10 00:39:48.688560 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-10 00:39:48.689033 | orchestrator | 2025-04-10 00:39:48.689620 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-10 00:39:48.690535 | orchestrator | Thursday 10 April 2025 00:39:48 +0000 (0:00:00.114) 0:00:02.249 ******** 2025-04-10 00:39:48.786428 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:39:48.786807 | orchestrator | 2025-04-10 00:39:48.787064 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-10 00:39:48.787094 | orchestrator | Thursday 10 April 2025 00:39:48 +0000 (0:00:00.103) 0:00:02.353 ******** 2025-04-10 00:39:49.529820 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:39:49.530255 | orchestrator | 2025-04-10 00:39:49.531304 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-10 00:39:49.531806 | orchestrator | Thursday 10 April 2025 00:39:49 +0000 (0:00:00.743) 0:00:03.096 ******** 2025-04-10 00:39:49.654207 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:39:49.656024 | orchestrator | 2025-04-10 00:39:49.657095 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-10 00:39:49.658422 | orchestrator | 2025-04-10 00:39:49.659121 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2025-04-10 00:39:49.659770 | orchestrator | Thursday 10 April 2025 00:39:49 +0000 (0:00:00.120) 0:00:03.217 ******** 2025-04-10 00:39:49.756787 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:39:49.757045 | orchestrator | 2025-04-10 00:39:49.757637 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-10 00:39:49.759429 | orchestrator | Thursday 10 April 2025 00:39:49 +0000 (0:00:00.105) 0:00:03.322 ******** 2025-04-10 00:39:50.407142 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:39:50.407719 | orchestrator | 2025-04-10 00:39:50.409350 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-10 00:39:50.513780 | orchestrator | Thursday 10 April 2025 00:39:50 +0000 (0:00:00.649) 0:00:03.972 ******** 2025-04-10 00:39:50.513905 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:39:50.514845 | orchestrator | 2025-04-10 00:39:50.515043 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-10 00:39:50.515396 | orchestrator | 2025-04-10 00:39:50.515465 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-10 00:39:50.516624 | orchestrator | Thursday 10 April 2025 00:39:50 +0000 (0:00:00.104) 0:00:04.077 ******** 2025-04-10 00:39:50.629200 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:39:50.629826 | orchestrator | 2025-04-10 00:39:50.630084 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-10 00:39:50.631088 | orchestrator | Thursday 10 April 2025 00:39:50 +0000 (0:00:00.117) 0:00:04.194 ******** 2025-04-10 00:39:51.312260 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:39:51.312500 | orchestrator | 2025-04-10 00:39:51.312563 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-04-10 00:39:51.312586 | orchestrator | Thursday 10 April 2025 00:39:51 +0000 (0:00:00.683) 0:00:04.878 ******** 2025-04-10 00:39:51.434628 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:39:51.434897 | orchestrator | 2025-04-10 00:39:51.436286 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-10 00:39:51.437906 | orchestrator | 2025-04-10 00:39:51.438013 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-10 00:39:51.438912 | orchestrator | Thursday 10 April 2025 00:39:51 +0000 (0:00:00.121) 0:00:04.999 ******** 2025-04-10 00:39:51.535865 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:39:51.536369 | orchestrator | 2025-04-10 00:39:51.537838 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-10 00:39:51.538342 | orchestrator | Thursday 10 April 2025 00:39:51 +0000 (0:00:00.102) 0:00:05.101 ******** 2025-04-10 00:39:52.261375 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:39:52.261599 | orchestrator | 2025-04-10 00:39:52.262486 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-10 00:39:52.263221 | orchestrator | Thursday 10 April 2025 00:39:52 +0000 (0:00:00.724) 0:00:05.826 ******** 2025-04-10 00:39:52.296912 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:39:52.297774 | orchestrator | 2025-04-10 00:39:52.299294 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:39:52.299381 | orchestrator | 2025-04-10 00:39:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-10 00:39:52.300402 | orchestrator | 2025-04-10 00:39:52 | INFO  | Please wait and do not abort execution. 
2025-04-10 00:39:52.300435 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:39:52.301003 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:39:52.302934 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:39:52.303600 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:39:52.303989 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:39:52.304613 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:39:52.305692 | orchestrator | 2025-04-10 00:39:52.306397 | orchestrator | Thursday 10 April 2025 00:39:52 +0000 (0:00:00.037) 0:00:05.864 ******** 2025-04-10 00:39:52.307599 | orchestrator | =============================================================================== 2025-04-10 00:39:52.308298 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.45s 2025-04-10 00:39:52.308506 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.63s 2025-04-10 00:39:52.309044 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2025-04-10 00:39:52.898212 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-04-10 00:39:54.423874 | orchestrator | 2025-04-10 00:39:54 | INFO  | Task 7aa86985-7aec-4814-ad79-5eb91124981f (wait-for-connection) was prepared for execution. 2025-04-10 00:39:57.684023 | orchestrator | 2025-04-10 00:39:54 | INFO  | It takes a moment until task 7aa86985-7aec-4814-ad79-5eb91124981f (wait-for-connection) has been started and output is visible here. 
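The reboot play above visits one node at a time and deliberately does not wait for the node to come back (the "wait for the reboot to complete" task is skipped on every host); reachability is verified afterwards by the separate wait-for-connection play. A hedged shell sketch of that fire-and-forget loop — the `SSH` override and the `systemctl reboot` command are illustrative assumptions, not the playbook's actual mechanism:

```shell
# Fire-and-forget reboot across a node list: issue the reboot request
# and move on immediately, as in the "do not wait" tasks above.
# SSH is an illustrative override so the transport can be stubbed.
reboot_nodes() {
    local node
    for node in "$@"; do
        # The reboot request returns quickly and the connection may
        # drop as the node goes down, so failures are ignored here.
        "${SSH:-ssh}" "$node" sudo systemctl reboot || true
    done
}
```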
2025-04-10 00:39:57.684198 | orchestrator | 2025-04-10 00:39:57.686839 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-04-10 00:39:57.686873 | orchestrator | 2025-04-10 00:39:57.686895 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-04-10 00:39:57.687332 | orchestrator | Thursday 10 April 2025 00:39:57 +0000 (0:00:00.199) 0:00:00.199 ******** 2025-04-10 00:40:10.655237 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:40:10.655436 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:40:10.655462 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:40:10.655477 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:40:10.655492 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:40:10.655511 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:40:10.656148 | orchestrator | 2025-04-10 00:40:10.657206 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:40:10.657699 | orchestrator | 2025-04-10 00:40:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-10 00:40:10.658308 | orchestrator | 2025-04-10 00:40:10 | INFO  | Please wait and do not abort execution. 
2025-04-10 00:40:10.658344 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:40:10.658681 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:40:10.659158 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:40:10.659664 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:40:10.660780 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:40:10.661682 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:40:10.662725 | orchestrator | 2025-04-10 00:40:10.663490 | orchestrator | Thursday 10 April 2025 00:40:10 +0000 (0:00:12.968) 0:00:13.168 ******** 2025-04-10 00:40:10.664139 | orchestrator | =============================================================================== 2025-04-10 00:40:10.666141 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.97s 2025-04-10 00:40:11.279332 | orchestrator | + osism apply hddtemp 2025-04-10 00:40:12.834895 | orchestrator | 2025-04-10 00:40:12 | INFO  | Task 1f687f93-4493-4797-8755-cd3c72e2826b (hddtemp) was prepared for execution. 2025-04-10 00:40:16.172584 | orchestrator | 2025-04-10 00:40:12 | INFO  | It takes a moment until task 1f687f93-4493-4797-8755-cd3c72e2826b (hddtemp) has been started and output is visible here. 
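The wait-for-connection play above blocked for about 13 s until all six rebooted nodes answered again. The same reachability poll can be sketched in plain shell — the 5 s connect timeout, 10 s retry interval, and 600 s default budget are illustrative assumptions, not values from the playbook:

```shell
# Poll a host until SSH answers or a deadline passes, mirroring the
# "Wait until remote system is reachable" task above.
# SSH is an illustrative override so the transport can be stubbed.
wait_for_ssh() {
    local host="$1"
    local deadline=$(( $(date +%s) + ${2:-600} ))
    until "${SSH:-ssh}" -o ConnectTimeout=5 -o BatchMode=yes "$host" true 2>/dev/null; do
        [ "$(date +%s)" -ge "$deadline" ] && return 1
        sleep 10
    done
}
```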
2025-04-10 00:40:16.172731 | orchestrator | 2025-04-10 00:40:16.173032 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-04-10 00:40:16.176386 | orchestrator | 2025-04-10 00:40:16.330312 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-04-10 00:40:16.330432 | orchestrator | Thursday 10 April 2025 00:40:16 +0000 (0:00:00.245) 0:00:00.245 ******** 2025-04-10 00:40:16.330465 | orchestrator | ok: [testbed-manager] 2025-04-10 00:40:16.418339 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:40:16.505612 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:40:16.589881 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:40:16.678128 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:40:16.960633 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:40:16.964552 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:40:16.964680 | orchestrator | 2025-04-10 00:40:16.965124 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-04-10 00:40:16.965490 | orchestrator | Thursday 10 April 2025 00:40:16 +0000 (0:00:00.785) 0:00:01.031 ******** 2025-04-10 00:40:18.290599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 00:40:18.291160 | orchestrator | 2025-04-10 00:40:18.292158 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-04-10 00:40:18.296108 | orchestrator | Thursday 10 April 2025 00:40:18 +0000 (0:00:01.331) 0:00:02.362 ******** 2025-04-10 00:40:20.320200 | orchestrator | ok: [testbed-manager] 2025-04-10 00:40:20.323279 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:40:20.325086 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:40:20.325190 | 
orchestrator | ok: [testbed-node-2] 2025-04-10 00:40:20.326167 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:40:20.326841 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:40:20.328512 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:40:20.330145 | orchestrator | 2025-04-10 00:40:20.331164 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-04-10 00:40:20.332285 | orchestrator | Thursday 10 April 2025 00:40:20 +0000 (0:00:02.030) 0:00:04.393 ******** 2025-04-10 00:40:20.896090 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:40:20.981504 | orchestrator | changed: [testbed-manager] 2025-04-10 00:40:21.513543 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:40:21.515704 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:40:21.516525 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:40:21.516578 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:40:21.516607 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:40:21.517015 | orchestrator | 2025-04-10 00:40:21.517053 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-04-10 00:40:21.517449 | orchestrator | Thursday 10 April 2025 00:40:21 +0000 (0:00:01.189) 0:00:05.582 ******** 2025-04-10 00:40:22.840732 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:40:22.841051 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:40:22.842177 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:40:22.843192 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:40:22.843602 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:40:22.844817 | orchestrator | ok: [testbed-manager] 2025-04-10 00:40:22.845298 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:40:22.846233 | orchestrator | 2025-04-10 00:40:22.847016 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-04-10 00:40:22.847357 | orchestrator | Thursday 10 April 2025 00:40:22 
+0000 (0:00:01.327) 0:00:06.910 ******** 2025-04-10 00:40:23.120010 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:40:23.212125 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:40:23.296857 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:40:23.379106 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:40:23.489717 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:40:23.491170 | orchestrator | changed: [testbed-manager] 2025-04-10 00:40:23.492460 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:40:23.493343 | orchestrator | 2025-04-10 00:40:23.494924 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-04-10 00:40:23.496110 | orchestrator | Thursday 10 April 2025 00:40:23 +0000 (0:00:00.654) 0:00:07.564 ******** 2025-04-10 00:40:36.405151 | orchestrator | changed: [testbed-manager] 2025-04-10 00:40:36.405330 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:40:36.405356 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:40:36.405378 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:40:36.405713 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:40:36.408271 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:40:36.408776 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:40:36.409772 | orchestrator | 2025-04-10 00:40:36.411634 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-04-10 00:40:36.411667 | orchestrator | Thursday 10 April 2025 00:40:36 +0000 (0:00:12.905) 0:00:20.469 ******** 2025-04-10 00:40:37.680233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 00:40:37.693389 | orchestrator | 2025-04-10 00:40:37.693472 | orchestrator | TASK [osism.services.hddtemp 
: Manage lm-sensors service] ********************** 2025-04-10 00:40:40.376101 | orchestrator | Thursday 10 April 2025 00:40:37 +0000 (0:00:01.282) 0:00:21.751 ******** 2025-04-10 00:40:40.376240 | orchestrator | changed: [testbed-manager] 2025-04-10 00:40:40.377763 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:40:40.379034 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:40:40.380676 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:40:40.381384 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:40:40.382475 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:40:40.383282 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:40:40.384526 | orchestrator | 2025-04-10 00:40:40.385134 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:40:40.386180 | orchestrator | 2025-04-10 00:40:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-10 00:40:40.387695 | orchestrator | 2025-04-10 00:40:40 | INFO  | Please wait and do not abort execution. 
2025-04-10 00:40:40.387727 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:40:40.388652 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:40:40.389515 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:40:40.390711 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:40:40.391614 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:40:40.392346 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:40:40.393846 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:40:40.394150 | orchestrator | 2025-04-10 00:40:40.394778 | orchestrator | Thursday 10 April 2025 00:40:40 +0000 (0:00:02.698) 0:00:24.449 ******** 2025-04-10 00:40:40.396059 | orchestrator | =============================================================================== 2025-04-10 00:40:40.396251 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.91s 2025-04-10 00:40:40.396376 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.70s 2025-04-10 00:40:40.397187 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.03s 2025-04-10 00:40:40.397692 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.33s 2025-04-10 00:40:40.397954 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.33s 2025-04-10 00:40:40.398987 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.28s 2025-04-10 00:40:40.399854 | orchestrator | 
osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.19s 2025-04-10 00:40:40.400552 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.79s 2025-04-10 00:40:40.401218 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.65s 2025-04-10 00:40:41.035260 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-04-10 00:40:42.462350 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-04-10 00:40:42.463137 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-04-10 00:40:42.463177 | orchestrator | + local max_attempts=60 2025-04-10 00:40:42.463195 | orchestrator | + local name=ceph-ansible 2025-04-10 00:40:42.463212 | orchestrator | + local attempt_num=1 2025-04-10 00:40:42.463235 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-04-10 00:40:42.500322 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-10 00:40:42.500719 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-04-10 00:40:42.500746 | orchestrator | + local max_attempts=60 2025-04-10 00:40:42.500758 | orchestrator | + local name=kolla-ansible 2025-04-10 00:40:42.500769 | orchestrator | + local attempt_num=1 2025-04-10 00:40:42.500784 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-04-10 00:40:42.541486 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-10 00:40:42.541693 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-04-10 00:40:42.541719 | orchestrator | + local max_attempts=60 2025-04-10 00:40:42.541734 | orchestrator | + local name=osism-ansible 2025-04-10 00:40:42.541749 | orchestrator | + local attempt_num=1 2025-04-10 00:40:42.541767 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-04-10 00:40:42.569516 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-10 
00:40:42.773039 | orchestrator | + [[ true == \t\r\u\e ]] 2025-04-10 00:40:42.773916 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-04-10 00:40:42.774090 | orchestrator | ARA in ceph-ansible already disabled. 2025-04-10 00:40:42.950173 | orchestrator | ARA in kolla-ansible already disabled. 2025-04-10 00:40:43.124491 | orchestrator | ARA in osism-ansible already disabled. 2025-04-10 00:40:43.307679 | orchestrator | ARA in osism-kubernetes already disabled. 2025-04-10 00:40:43.308443 | orchestrator | + osism apply gather-facts 2025-04-10 00:40:44.981185 | orchestrator | 2025-04-10 00:40:44 | INFO  | Task e30e2455-21dc-4f1b-80f4-a9e158f11e34 (gather-facts) was prepared for execution. 2025-04-10 00:40:48.313611 | orchestrator | 2025-04-10 00:40:44 | INFO  | It takes a moment until task e30e2455-21dc-4f1b-80f4-a9e158f11e34 (gather-facts) has been started and output is visible here. 2025-04-10 00:40:48.313766 | orchestrator | 2025-04-10 00:40:48.313921 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-10 00:40:48.314826 | orchestrator | 2025-04-10 00:40:48.320040 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-10 00:40:53.449411 | orchestrator | Thursday 10 April 2025 00:40:48 +0000 (0:00:00.186) 0:00:00.186 ******** 2025-04-10 00:40:53.449551 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:40:53.450851 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:40:53.450896 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:40:53.450918 | orchestrator | ok: [testbed-manager] 2025-04-10 00:40:53.451782 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:40:53.454067 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:40:53.454901 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:40:53.455751 | orchestrator | 2025-04-10 00:40:53.456772 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 
2025-04-10 00:40:53.457244 | orchestrator | 2025-04-10 00:40:53.458234 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-10 00:40:53.458951 | orchestrator | Thursday 10 April 2025 00:40:53 +0000 (0:00:05.136) 0:00:05.323 ******** 2025-04-10 00:40:53.628445 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:40:53.699818 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:40:53.782384 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:40:53.862199 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:40:53.941138 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:40:53.985529 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:40:53.985719 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:40:53.986527 | orchestrator | 2025-04-10 00:40:53.987076 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:40:53.987567 | orchestrator | 2025-04-10 00:40:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-10 00:40:53.987823 | orchestrator | 2025-04-10 00:40:53 | INFO  | Please wait and do not abort execution. 
2025-04-10 00:40:53.989063 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:40:53.990438 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:40:53.991277 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:40:53.992441 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:40:53.993213 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:40:53.993624 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:40:53.994499 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 00:40:53.994876 | orchestrator | 2025-04-10 00:40:53.995601 | orchestrator | Thursday 10 April 2025 00:40:53 +0000 (0:00:00.539) 0:00:05.863 ******** 2025-04-10 00:40:53.996086 | orchestrator | =============================================================================== 2025-04-10 00:40:53.996450 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.14s 2025-04-10 00:40:53.997304 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-04-10 00:40:54.589790 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-04-10 00:40:54.609610 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-04-10 00:40:54.629025 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-04-10 00:40:54.648489 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-04-10 00:40:54.661458 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-04-10 00:40:54.678454 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-04-10 00:40:54.701823 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-04-10 00:40:54.723765 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-04-10 00:40:54.742307 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-04-10 00:40:54.761330 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-04-10 00:40:54.776859 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-04-10 00:40:54.791301 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-04-10 00:40:54.809144 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-04-10 00:40:54.823185 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-04-10 00:40:54.836218 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-04-10 00:40:54.849097 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-04-10 00:40:54.861778 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-04-10 00:40:54.879346 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-04-10 00:40:54.898586 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-04-10 00:40:54.918854 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-04-10 00:40:54.936831 | orchestrator | + [[ false == \t\r\u\e ]] 2025-04-10 00:40:55.077725 | orchestrator | changed 2025-04-10 00:40:55.132117 | 2025-04-10 00:40:55.132222 | TASK [Deploy services] 2025-04-10 00:40:55.247466 | orchestrator | skipping: Conditional result was False 2025-04-10 00:40:55.261178 | 2025-04-10 00:40:55.261290 | TASK [Deploy in a nutshell] 2025-04-10 00:40:55.954454 | orchestrator | + set -e 2025-04-10 00:40:55.954653 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-10 00:40:55.954684 | orchestrator | ++ export INTERACTIVE=false 2025-04-10 00:40:55.954702 | orchestrator | ++ INTERACTIVE=false 2025-04-10 00:40:55.954746 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-10 00:40:55.954764 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-10 00:40:55.954780 | orchestrator | + source /opt/manager-vars.sh 2025-04-10 00:40:55.954803 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-10 00:40:55.954827 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-10 00:40:55.954843 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-10 00:40:55.954857 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-10 00:40:55.954872 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-10 00:40:55.954886 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-10 00:40:55.954900 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-10 00:40:55.954914 | orchestrator | ++ 
MANAGER_VERSION=8.1.0 2025-04-10 00:40:55.954929 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-10 00:40:55.954944 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-10 00:40:55.954989 | orchestrator | ++ export ARA=false 2025-04-10 00:40:55.955005 | orchestrator | ++ ARA=false 2025-04-10 00:40:55.955020 | orchestrator | ++ export TEMPEST=false 2025-04-10 00:40:55.955034 | orchestrator | ++ TEMPEST=false 2025-04-10 00:40:55.955047 | orchestrator | ++ export IS_ZUUL=true 2025-04-10 00:40:55.955061 | orchestrator | ++ IS_ZUUL=true 2025-04-10 00:40:55.955075 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.103 2025-04-10 00:40:55.955089 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.103 2025-04-10 00:40:55.955103 | orchestrator | ++ export EXTERNAL_API=false 2025-04-10 00:40:55.955117 | orchestrator | ++ EXTERNAL_API=false 2025-04-10 00:40:55.955130 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-10 00:40:55.955144 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-10 00:40:55.955158 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-10 00:40:55.955172 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-10 00:40:55.955190 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-10 00:40:55.955212 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-10 00:40:55.955235 | orchestrator | 2025-04-10 00:40:55.955853 | orchestrator | # PULL IMAGES 2025-04-10 00:40:55.955873 | orchestrator | 2025-04-10 00:40:55.955888 | orchestrator | + echo 2025-04-10 00:40:55.955902 | orchestrator | + echo '# PULL IMAGES' 2025-04-10 00:40:55.955916 | orchestrator | + echo 2025-04-10 00:40:55.955935 | orchestrator | ++ semver 8.1.0 7.0.0 2025-04-10 00:40:56.020842 | orchestrator | + [[ 1 -ge 0 ]] 2025-04-10 00:40:57.484359 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-04-10 00:40:57.484534 | orchestrator | 2025-04-10 00:40:57 | INFO  | Trying to run play pull-images in environment custom 2025-04-10 00:40:57.534309 | orchestrator 
| 2025-04-10 00:40:57 | INFO  | Task c869aa4c-b555-4d56-8e75-1f0e6763a780 (pull-images) was prepared for execution. 2025-04-10 00:41:00.777075 | orchestrator | 2025-04-10 00:40:57 | INFO  | It takes a moment until task c869aa4c-b555-4d56-8e75-1f0e6763a780 (pull-images) has been started and output is visible here. 2025-04-10 00:41:00.777231 | orchestrator | 2025-04-10 00:41:00.778443 | orchestrator | PLAY [Pull images] ************************************************************* 2025-04-10 00:41:00.779520 | orchestrator | 2025-04-10 00:41:00.781709 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-04-10 00:41:00.782861 | orchestrator | Thursday 10 April 2025 00:41:00 +0000 (0:00:00.148) 0:00:00.148 ******** 2025-04-10 00:41:38.977309 | orchestrator | changed: [testbed-manager] 2025-04-10 00:42:29.072750 | orchestrator | 2025-04-10 00:42:29.072911 | orchestrator | TASK [Pull other images] ******************************************************* 2025-04-10 00:42:29.072933 | orchestrator | Thursday 10 April 2025 00:41:38 +0000 (0:00:38.198) 0:00:38.347 ******** 2025-04-10 00:42:29.073019 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-04-10 00:42:29.074094 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-04-10 00:42:29.074124 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-04-10 00:42:29.074145 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-04-10 00:42:29.074782 | orchestrator | changed: [testbed-manager] => (item=common) 2025-04-10 00:42:29.074811 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-04-10 00:42:29.074832 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-04-10 00:42:29.075735 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-04-10 00:42:29.078000 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-04-10 00:42:29.078335 | orchestrator | changed: [testbed-manager] 
=> (item=ironic) 2025-04-10 00:42:29.079119 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-04-10 00:42:29.079613 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-04-10 00:42:29.080546 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-04-10 00:42:29.080863 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-04-10 00:42:29.081373 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-04-10 00:42:29.081949 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-04-10 00:42:29.082380 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-04-10 00:42:29.082886 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-04-10 00:42:29.083410 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-04-10 00:42:29.083885 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-04-10 00:42:29.084139 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-04-10 00:42:29.084689 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-04-10 00:42:29.085033 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-04-10 00:42:29.085791 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-04-10 00:42:29.086229 | orchestrator | 2025-04-10 00:42:29.087402 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:42:29.087869 | orchestrator | 2025-04-10 00:42:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-10 00:42:29.087895 | orchestrator | 2025-04-10 00:42:29 | INFO  | Please wait and do not abort execution. 
2025-04-10 00:42:29.087916 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:42:29.088673 | orchestrator | 2025-04-10 00:42:29.089144 | orchestrator | Thursday 10 April 2025 00:42:29 +0000 (0:00:50.098) 0:01:28.446 ******** 2025-04-10 00:42:29.089581 | orchestrator | =============================================================================== 2025-04-10 00:42:29.090097 | orchestrator | Pull other images ------------------------------------------------------ 50.10s 2025-04-10 00:42:29.090644 | orchestrator | Pull keystone image ---------------------------------------------------- 38.20s 2025-04-10 00:42:31.378002 | orchestrator | 2025-04-10 00:42:31 | INFO  | Trying to run play wipe-partitions in environment custom 2025-04-10 00:42:31.429073 | orchestrator | 2025-04-10 00:42:31 | INFO  | Task 787cfc4f-836f-4801-b852-ac9a6dba5487 (wipe-partitions) was prepared for execution. 2025-04-10 00:42:34.656595 | orchestrator | 2025-04-10 00:42:31 | INFO  | It takes a moment until task 787cfc4f-836f-4801-b852-ac9a6dba5487 (wipe-partitions) has been started and output is visible here. 
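The container health checks traced earlier in this section (`+ wait_for_container_healthy 60 ceph-ansible` and friends) expand their locals in the `set -x` output, which is enough to sketch the helper. This is a reconstruction, not the actual script from `/opt/configuration`: the poll interval, the failure message, and calling `docker` without its absolute path are assumptions.

```shell
#!/usr/bin/env bash
# Sketch of wait_for_container_healthy as suggested by the set -x trace.
# Polls `docker inspect` until the container reports "healthy" or the
# attempt budget is exhausted.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 1  # assumed poll interval; the real delay is not visible in the trace
    done
}
```

In the log each call returns immediately, since the first `docker inspect` already prints `healthy` for ceph-ansible, kolla-ansible, and osism-ansible.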
2025-04-10 00:42:34.656743 | orchestrator | 2025-04-10 00:42:34.659260 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-04-10 00:42:34.659298 | orchestrator | 2025-04-10 00:42:34.663847 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-04-10 00:42:35.274242 | orchestrator | Thursday 10 April 2025 00:42:34 +0000 (0:00:00.132) 0:00:00.132 ******** 2025-04-10 00:42:35.274414 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:42:35.275342 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:42:35.275377 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:42:35.276306 | orchestrator | 2025-04-10 00:42:35.276824 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-04-10 00:42:35.276864 | orchestrator | Thursday 10 April 2025 00:42:35 +0000 (0:00:00.615) 0:00:00.748 ******** 2025-04-10 00:42:35.473166 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:42:35.568692 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:42:35.572093 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:42:36.319644 | orchestrator | 2025-04-10 00:42:36.319811 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-04-10 00:42:36.319833 | orchestrator | Thursday 10 April 2025 00:42:35 +0000 (0:00:00.297) 0:00:01.045 ******** 2025-04-10 00:42:36.319871 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:42:36.326391 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:42:36.326429 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:42:36.326444 | orchestrator | 2025-04-10 00:42:36.326459 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-04-10 00:42:36.326482 | orchestrator | Thursday 10 April 2025 00:42:36 +0000 (0:00:00.745) 0:00:01.790 ******** 2025-04-10 00:42:36.488617 | orchestrator | skipping: 
[testbed-node-3] 2025-04-10 00:42:36.592155 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:42:36.594835 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:42:36.596313 | orchestrator | 2025-04-10 00:42:36.596356 | orchestrator | TASK [Check device availability] *********************************************** 2025-04-10 00:42:36.596564 | orchestrator | Thursday 10 April 2025 00:42:36 +0000 (0:00:00.277) 0:00:02.068 ******** 2025-04-10 00:42:37.816782 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-10 00:42:37.817151 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-10 00:42:37.817216 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-10 00:42:37.817358 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-10 00:42:37.817400 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-10 00:42:37.817894 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-10 00:42:37.819192 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-10 00:42:37.819304 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-10 00:42:37.819637 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-10 00:42:37.823040 | orchestrator | 2025-04-10 00:42:37.823400 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-04-10 00:42:37.823883 | orchestrator | Thursday 10 April 2025 00:42:37 +0000 (0:00:01.221) 0:00:03.290 ******** 2025-04-10 00:42:39.211680 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-04-10 00:42:39.212461 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-04-10 00:42:39.212992 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-04-10 00:42:39.213012 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-04-10 00:42:39.214351 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-04-10 00:42:39.214730 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-04-10 00:42:39.214774 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-04-10 00:42:39.215103 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-04-10 00:42:39.215411 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-04-10 00:42:39.215729 | orchestrator | 2025-04-10 00:42:39.216141 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-04-10 00:42:39.216450 | orchestrator | Thursday 10 April 2025 00:42:39 +0000 (0:00:01.398) 0:00:04.688 ******** 2025-04-10 00:42:41.480996 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-10 00:42:41.482405 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-10 00:42:41.482840 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-10 00:42:41.484548 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-10 00:42:41.485142 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-10 00:42:41.485619 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-10 00:42:41.488242 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-10 00:42:41.488743 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-10 00:42:41.488774 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-10 00:42:41.489046 | orchestrator | 2025-04-10 00:42:41.489503 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-04-10 00:42:41.489769 | orchestrator | Thursday 10 April 2025 00:42:41 +0000 (0:00:02.269) 0:00:06.958 ******** 2025-04-10 00:42:42.107699 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:42:42.110441 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:42:42.110588 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:42:42.111140 | orchestrator | 2025-04-10 00:42:42.111483 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-04-10 00:42:42.111723 | orchestrator | Thursday 10 April 2025 00:42:42 +0000 (0:00:00.629) 0:00:07.587 ******** 2025-04-10 00:42:42.772901 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:42:42.778243 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:42:42.779715 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:42:42.781091 | orchestrator | 2025-04-10 00:42:42.782238 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:42:42.782457 | orchestrator | 2025-04-10 00:42:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-10 00:42:42.783026 | orchestrator | 2025-04-10 00:42:42 | INFO  | Please wait and do not abort execution. 2025-04-10 00:42:42.783056 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:42:42.783718 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:42:42.784464 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:42:42.785027 | orchestrator | 2025-04-10 00:42:42.785625 | orchestrator | Thursday 10 April 2025 00:42:42 +0000 (0:00:00.664) 0:00:08.252 ******** 2025-04-10 00:42:42.786082 | orchestrator | =============================================================================== 2025-04-10 00:42:42.786484 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.27s 2025-04-10 00:42:42.787039 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.40s 2025-04-10 00:42:42.787417 | orchestrator | Check device availability ----------------------------------------------- 1.22s 2025-04-10 00:42:42.787867 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.75s 2025-04-10 
00:42:42.788643 | orchestrator | Request device events from the kernel ----------------------------------- 0.67s 2025-04-10 00:42:42.788884 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s 2025-04-10 00:42:42.789277 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.62s 2025-04-10 00:42:42.789698 | orchestrator | Remove all rook related logical devices --------------------------------- 0.30s 2025-04-10 00:42:42.790140 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2025-04-10 00:42:44.992151 | orchestrator | 2025-04-10 00:42:44 | INFO  | Task 3c7e73e9-0a63-4616-ae2b-746513838773 (facts) was prepared for execution. 2025-04-10 00:42:49.258783 | orchestrator | 2025-04-10 00:42:44 | INFO  | It takes a moment until task 3c7e73e9-0a63-4616-ae2b-746513838773 (facts) has been started and output is visible here. 2025-04-10 00:42:49.258894 | orchestrator | 2025-04-10 00:42:49.264165 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-04-10 00:42:49.264231 | orchestrator | 2025-04-10 00:42:49.266096 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-10 00:42:49.267355 | orchestrator | Thursday 10 April 2025 00:42:49 +0000 (0:00:00.215) 0:00:00.215 ******** 2025-04-10 00:42:50.343504 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:42:50.344335 | orchestrator | ok: [testbed-manager] 2025-04-10 00:42:50.348220 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:42:50.349016 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:42:50.349038 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:42:50.349046 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:42:50.349056 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:42:50.349791 | orchestrator | 2025-04-10 00:42:50.351286 | orchestrator | TASK [osism.commons.facts : Copy fact files] 
*********************************** 2025-04-10 00:42:50.351843 | orchestrator | Thursday 10 April 2025 00:42:50 +0000 (0:00:01.089) 0:00:01.305 ******** 2025-04-10 00:42:50.544394 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:42:50.643470 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:42:50.729109 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:42:50.812491 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:42:50.903144 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:42:51.901375 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:42:51.901579 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:42:51.901610 | orchestrator | 2025-04-10 00:42:51.902339 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-10 00:42:51.902403 | orchestrator | 2025-04-10 00:42:51.902996 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-10 00:42:51.903399 | orchestrator | Thursday 10 April 2025 00:42:51 +0000 (0:00:01.559) 0:00:02.864 ******** 2025-04-10 00:42:56.711730 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:42:56.713750 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:42:56.713837 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:42:56.717716 | orchestrator | ok: [testbed-manager] 2025-04-10 00:42:56.718147 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:42:56.718894 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:42:56.718919 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:42:56.719873 | orchestrator | 2025-04-10 00:42:56.721898 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-10 00:42:57.339905 | orchestrator | 2025-04-10 00:42:57.340035 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-10 00:42:57.340047 | orchestrator | Thursday 10 April 2025 00:42:56 +0000 (0:00:04.812) 
0:00:07.676 ******** 2025-04-10 00:42:57.340076 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:42:57.479769 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:42:57.590331 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:42:57.684059 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:42:57.766390 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:42:57.804858 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:42:57.807198 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:42:57.807546 | orchestrator | 2025-04-10 00:42:57.807573 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:42:57.807589 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:42:57.807606 | orchestrator | 2025-04-10 00:42:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-10 00:42:57.807622 | orchestrator | 2025-04-10 00:42:57 | INFO  | Please wait and do not abort execution. 
2025-04-10 00:42:57.807637 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:42:57.807656 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:42:57.808073 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:42:57.808508 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:42:57.809009 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:42:57.811367 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:42:57.811831 | orchestrator | 2025-04-10 00:42:57.811872 | orchestrator | Thursday 10 April 2025 00:42:57 +0000 (0:00:01.094) 0:00:08.771 ******** 2025-04-10 00:42:57.811888 | orchestrator | =============================================================================== 2025-04-10 00:42:57.811903 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.81s 2025-04-10 00:42:57.811925 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.56s 2025-04-10 00:42:57.812233 | orchestrator | Gather facts for all hosts ---------------------------------------------- 1.10s 2025-04-10 00:42:57.812876 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s 2025-04-10 00:43:00.160014 | orchestrator | 2025-04-10 00:43:00 | INFO  | Task 2703048f-4e74-4243-bf44-e0b926908ee0 (ceph-configure-lvm-volumes) was prepared for execution. 2025-04-10 00:43:00.160843 | orchestrator | 2025-04-10 00:43:00 | INFO  | It takes a moment until task 2703048f-4e74-4243-bf44-e0b926908ee0 (ceph-configure-lvm-volumes) has been started and output is visible here. 
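The `wipe-partitions` play above runs `wipefs` over `/dev/sdb`, `/dev/sdc`, and `/dev/sdd` on the three storage nodes, zeroes the first 32M of each device, then reloads udev rules and triggers device events. A per-device sketch of that sequence follows; the exact Ansible module arguments are not shown in the log, so the `wipefs`/`dd` flags are assumptions, and the function takes any writable path so it can be exercised against a scratch file instead of a real disk.

```shell
#!/usr/bin/env bash
# Per-device wipe, mirroring the "Wipe partitions with wipefs" and
# "Overwrite first 32M with zeros" tasks from the play above.
wipe_device() {
    local dev=$1
    # Drop filesystem/RAID/partition-table signatures, if any are present.
    if command -v wipefs >/dev/null; then
        wipefs --all "$dev" 2>/dev/null || true
    fi
    # Zero out the first 32 MiB without truncating the target.
    dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc status=none
}

# On a real node the play then follows up with (root required):
#   for dev in /dev/sdb /dev/sdc /dev/sdd; do wipe_device "$dev"; done
#   udevadm control --reload-rules   # "Reload udev rules"
#   udevadm trigger                  # "Request device events from the kernel"
```

The udev steps matter because the subsequent `ceph-configure-lvm-volumes` task enumerates block devices and their `/dev/disk/by-*` links, which must reflect the freshly wiped state.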
2025-04-10 00:43:03.786815 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-10 00:43:04.416833 | orchestrator | 2025-04-10 00:43:04.418743 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-10 00:43:04.419325 | orchestrator | 2025-04-10 00:43:04.420050 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-10 00:43:04.421264 | orchestrator | Thursday 10 April 2025 00:43:04 +0000 (0:00:00.549) 0:00:00.549 ******** 2025-04-10 00:43:04.691262 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-10 00:43:04.691523 | orchestrator | 2025-04-10 00:43:04.691552 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-10 00:43:04.691874 | orchestrator | Thursday 10 April 2025 00:43:04 +0000 (0:00:00.269) 0:00:00.818 ******** 2025-04-10 00:43:04.923215 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:43:04.926244 | orchestrator | 2025-04-10 00:43:04.927583 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:04.930469 | orchestrator | Thursday 10 April 2025 00:43:04 +0000 (0:00:00.235) 0:00:01.054 ******** 2025-04-10 00:43:05.503503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-04-10 00:43:05.504108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-04-10 00:43:05.504154 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-04-10 00:43:05.507222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-04-10 00:43:05.510348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-04-10 00:43:05.513002 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-04-10 00:43:05.514497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-04-10 00:43:05.514523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-04-10 00:43:05.514538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-04-10 00:43:05.514557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-04-10 00:43:05.515066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-04-10 00:43:05.515235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-04-10 00:43:05.515264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-04-10 00:43:05.515498 | orchestrator | 2025-04-10 00:43:05.515643 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:05.518189 | orchestrator | Thursday 10 April 2025 00:43:05 +0000 (0:00:00.583) 0:00:01.637 ******** 2025-04-10 00:43:05.738754 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:05.739901 | orchestrator | 2025-04-10 00:43:05.740234 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:05.740922 | orchestrator | Thursday 10 April 2025 00:43:05 +0000 (0:00:00.235) 0:00:01.873 ******** 2025-04-10 00:43:05.984667 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:05.984864 | orchestrator | 2025-04-10 00:43:05.984897 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:05.984921 | orchestrator | Thursday 10 April 2025 00:43:05 +0000 (0:00:00.241) 0:00:02.115 ******** 2025-04-10 00:43:06.195082 | orchestrator | skipping: 
[testbed-node-3] 2025-04-10 00:43:06.195809 | orchestrator | 2025-04-10 00:43:06.196320 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:06.198407 | orchestrator | Thursday 10 April 2025 00:43:06 +0000 (0:00:00.211) 0:00:02.326 ******** 2025-04-10 00:43:06.444190 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:06.444314 | orchestrator | 2025-04-10 00:43:06.444669 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:06.445205 | orchestrator | Thursday 10 April 2025 00:43:06 +0000 (0:00:00.251) 0:00:02.577 ******** 2025-04-10 00:43:06.753566 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:06.756167 | orchestrator | 2025-04-10 00:43:06.756474 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:06.756507 | orchestrator | Thursday 10 April 2025 00:43:06 +0000 (0:00:00.305) 0:00:02.883 ******** 2025-04-10 00:43:06.986595 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:06.989625 | orchestrator | 2025-04-10 00:43:06.990765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:06.990815 | orchestrator | Thursday 10 April 2025 00:43:06 +0000 (0:00:00.237) 0:00:03.121 ******** 2025-04-10 00:43:07.280582 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:07.284551 | orchestrator | 2025-04-10 00:43:07.287553 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:07.287889 | orchestrator | Thursday 10 April 2025 00:43:07 +0000 (0:00:00.291) 0:00:03.412 ******** 2025-04-10 00:43:07.680377 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:07.681215 | orchestrator | 2025-04-10 00:43:07.681256 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:07.681280 | 
orchestrator | Thursday 10 April 2025 00:43:07 +0000 (0:00:00.396) 0:00:03.808 ******** 2025-04-10 00:43:08.488069 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b) 2025-04-10 00:43:08.489281 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b) 2025-04-10 00:43:08.489607 | orchestrator | 2025-04-10 00:43:08.490058 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:08.490438 | orchestrator | Thursday 10 April 2025 00:43:08 +0000 (0:00:00.809) 0:00:04.618 ******** 2025-04-10 00:43:09.445435 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e188828f-11b5-49b7-aa2c-198471f41cb7) 2025-04-10 00:43:09.448041 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e188828f-11b5-49b7-aa2c-198471f41cb7) 2025-04-10 00:43:09.448088 | orchestrator | 2025-04-10 00:43:09.448188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:09.448797 | orchestrator | Thursday 10 April 2025 00:43:09 +0000 (0:00:00.956) 0:00:05.575 ******** 2025-04-10 00:43:09.947704 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_57ed073f-7848-4dd1-911d-b06790e5cae3) 2025-04-10 00:43:09.948132 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_57ed073f-7848-4dd1-911d-b06790e5cae3) 2025-04-10 00:43:09.948376 | orchestrator | 2025-04-10 00:43:09.949083 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:09.951184 | orchestrator | Thursday 10 April 2025 00:43:09 +0000 (0:00:00.502) 0:00:06.078 ******** 2025-04-10 00:43:10.627648 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4f117f5c-a676-4195-9d53-4eb16ef4d9e2) 2025-04-10 00:43:10.628688 | orchestrator | ok: [testbed-node-3] => 
(item=scsi-SQEMU_QEMU_HARDDISK_4f117f5c-a676-4195-9d53-4eb16ef4d9e2) 2025-04-10 00:43:10.632222 | orchestrator | 2025-04-10 00:43:10.632633 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:10.632804 | orchestrator | Thursday 10 April 2025 00:43:10 +0000 (0:00:00.680) 0:00:06.759 ******** 2025-04-10 00:43:11.069631 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-10 00:43:11.071633 | orchestrator | 2025-04-10 00:43:11.073899 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:11.075168 | orchestrator | Thursday 10 April 2025 00:43:11 +0000 (0:00:00.442) 0:00:07.202 ******** 2025-04-10 00:43:11.529253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-10 00:43:11.532216 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-10 00:43:11.532915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-10 00:43:11.534386 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-10 00:43:11.537571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-10 00:43:11.540103 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-10 00:43:11.542157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-04-10 00:43:11.543437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-10 00:43:11.544690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-10 00:43:11.546355 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-10 00:43:11.547865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-10 00:43:11.549545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-04-10 00:43:11.549589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-10 00:43:11.552648 | orchestrator | 2025-04-10 00:43:11.554915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:11.556187 | orchestrator | Thursday 10 April 2025 00:43:11 +0000 (0:00:00.455) 0:00:07.657 ******** 2025-04-10 00:43:11.744451 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:11.744649 | orchestrator | 2025-04-10 00:43:11.746673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:11.748428 | orchestrator | Thursday 10 April 2025 00:43:11 +0000 (0:00:00.217) 0:00:07.875 ******** 2025-04-10 00:43:11.986329 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:11.988532 | orchestrator | 2025-04-10 00:43:11.988562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:11.990387 | orchestrator | Thursday 10 April 2025 00:43:11 +0000 (0:00:00.242) 0:00:08.118 ******** 2025-04-10 00:43:12.200326 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:12.202933 | orchestrator | 2025-04-10 00:43:12.203381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:12.204197 | orchestrator | Thursday 10 April 2025 00:43:12 +0000 (0:00:00.213) 0:00:08.331 ******** 2025-04-10 00:43:12.436162 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:12.443206 | orchestrator | 2025-04-10 00:43:12.443250 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-04-10 00:43:12.443342 | orchestrator | Thursday 10 April 2025 00:43:12 +0000 (0:00:00.236) 0:00:08.567 ******** 2025-04-10 00:43:13.321165 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:13.321725 | orchestrator | 2025-04-10 00:43:13.322102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:13.324013 | orchestrator | Thursday 10 April 2025 00:43:13 +0000 (0:00:00.885) 0:00:09.453 ******** 2025-04-10 00:43:13.590481 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:13.590653 | orchestrator | 2025-04-10 00:43:13.591464 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:13.591984 | orchestrator | Thursday 10 April 2025 00:43:13 +0000 (0:00:00.268) 0:00:09.721 ******** 2025-04-10 00:43:13.869830 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:13.870497 | orchestrator | 2025-04-10 00:43:13.870733 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:13.873342 | orchestrator | Thursday 10 April 2025 00:43:13 +0000 (0:00:00.282) 0:00:10.004 ******** 2025-04-10 00:43:14.122426 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:14.122587 | orchestrator | 2025-04-10 00:43:14.123419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:14.123694 | orchestrator | Thursday 10 April 2025 00:43:14 +0000 (0:00:00.252) 0:00:10.256 ******** 2025-04-10 00:43:14.902689 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-04-10 00:43:14.903838 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-10 00:43:14.905934 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-04-10 00:43:14.907133 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-10 00:43:14.908000 | orchestrator | 2025-04-10 
00:43:14.911607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:14.912252 | orchestrator | Thursday 10 April 2025 00:43:14 +0000 (0:00:00.778) 0:00:11.035 ******** 2025-04-10 00:43:15.125274 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:15.126330 | orchestrator | 2025-04-10 00:43:15.126372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:15.127362 | orchestrator | Thursday 10 April 2025 00:43:15 +0000 (0:00:00.223) 0:00:11.258 ******** 2025-04-10 00:43:15.357330 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:15.359348 | orchestrator | 2025-04-10 00:43:15.359394 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:15.360281 | orchestrator | Thursday 10 April 2025 00:43:15 +0000 (0:00:00.231) 0:00:11.490 ******** 2025-04-10 00:43:15.599713 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:15.601285 | orchestrator | 2025-04-10 00:43:15.602906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:15.605293 | orchestrator | Thursday 10 April 2025 00:43:15 +0000 (0:00:00.242) 0:00:11.733 ******** 2025-04-10 00:43:15.827140 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:15.829364 | orchestrator | 2025-04-10 00:43:15.829843 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-10 00:43:15.830076 | orchestrator | Thursday 10 April 2025 00:43:15 +0000 (0:00:00.227) 0:00:11.960 ******** 2025-04-10 00:43:16.139894 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-04-10 00:43:16.142561 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-04-10 00:43:16.143665 | orchestrator | 2025-04-10 00:43:16.144919 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2025-04-10 00:43:16.145606 | orchestrator | Thursday 10 April 2025 00:43:16 +0000 (0:00:00.312) 0:00:12.272 ******** 2025-04-10 00:43:16.510415 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:16.699508 | orchestrator | 2025-04-10 00:43:16.699621 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-10 00:43:16.699640 | orchestrator | Thursday 10 April 2025 00:43:16 +0000 (0:00:00.366) 0:00:12.639 ******** 2025-04-10 00:43:16.699671 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:16.699996 | orchestrator | 2025-04-10 00:43:16.700706 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-10 00:43:16.701452 | orchestrator | Thursday 10 April 2025 00:43:16 +0000 (0:00:00.194) 0:00:12.833 ******** 2025-04-10 00:43:16.921578 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:16.929464 | orchestrator | 2025-04-10 00:43:16.934362 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-10 00:43:16.935737 | orchestrator | Thursday 10 April 2025 00:43:16 +0000 (0:00:00.213) 0:00:13.047 ******** 2025-04-10 00:43:17.084805 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:43:17.085723 | orchestrator | 2025-04-10 00:43:17.085762 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-10 00:43:17.085789 | orchestrator | Thursday 10 April 2025 00:43:17 +0000 (0:00:00.167) 0:00:13.214 ******** 2025-04-10 00:43:17.286331 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7af0ad6a-7281-507c-97d1-7760f3587d37'}}) 2025-04-10 00:43:17.286556 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52286b97-e205-54c6-a29d-cc3afdc4b583'}}) 2025-04-10 00:43:17.288312 | orchestrator | 2025-04-10 00:43:17.288381 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2025-04-10 00:43:17.288812 | orchestrator | Thursday 10 April 2025 00:43:17 +0000 (0:00:00.205) 0:00:13.420 ******** 2025-04-10 00:43:17.498638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7af0ad6a-7281-507c-97d1-7760f3587d37'}})  2025-04-10 00:43:17.502942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52286b97-e205-54c6-a29d-cc3afdc4b583'}})  2025-04-10 00:43:17.503449 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:17.503481 | orchestrator | 2025-04-10 00:43:17.504895 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-10 00:43:17.505487 | orchestrator | Thursday 10 April 2025 00:43:17 +0000 (0:00:00.212) 0:00:13.632 ******** 2025-04-10 00:43:17.676883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7af0ad6a-7281-507c-97d1-7760f3587d37'}})  2025-04-10 00:43:17.677642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52286b97-e205-54c6-a29d-cc3afdc4b583'}})  2025-04-10 00:43:17.678126 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:17.678797 | orchestrator | 2025-04-10 00:43:17.679468 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-10 00:43:17.838886 | orchestrator | Thursday 10 April 2025 00:43:17 +0000 (0:00:00.178) 0:00:13.811 ******** 2025-04-10 00:43:17.839053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7af0ad6a-7281-507c-97d1-7760f3587d37'}})  2025-04-10 00:43:17.840405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52286b97-e205-54c6-a29d-cc3afdc4b583'}})  2025-04-10 00:43:17.845447 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:18.065059 | 
orchestrator | 2025-04-10 00:43:18.065173 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-10 00:43:18.065192 | orchestrator | Thursday 10 April 2025 00:43:17 +0000 (0:00:00.159) 0:00:13.971 ******** 2025-04-10 00:43:18.065222 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:43:18.228814 | orchestrator | 2025-04-10 00:43:18.228910 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-10 00:43:18.228925 | orchestrator | Thursday 10 April 2025 00:43:18 +0000 (0:00:00.222) 0:00:14.193 ******** 2025-04-10 00:43:18.228947 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:43:18.229075 | orchestrator | 2025-04-10 00:43:18.229416 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-10 00:43:18.229437 | orchestrator | Thursday 10 April 2025 00:43:18 +0000 (0:00:00.170) 0:00:14.363 ******** 2025-04-10 00:43:18.465014 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:18.465279 | orchestrator | 2025-04-10 00:43:18.465815 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-10 00:43:18.466435 | orchestrator | Thursday 10 April 2025 00:43:18 +0000 (0:00:00.235) 0:00:14.598 ******** 2025-04-10 00:43:18.979837 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:18.981980 | orchestrator | 2025-04-10 00:43:18.983376 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-10 00:43:18.984930 | orchestrator | Thursday 10 April 2025 00:43:18 +0000 (0:00:00.513) 0:00:15.112 ******** 2025-04-10 00:43:19.173349 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:43:19.175834 | orchestrator | 2025-04-10 00:43:19.177075 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-10 00:43:19.177143 | orchestrator | Thursday 10 April 2025 00:43:19 +0000 
(0:00:00.189) 0:00:15.301 ********
2025-04-10 00:43:19.330783 | orchestrator | ok: [testbed-node-3] => {
2025-04-10 00:43:19.331666 | orchestrator |     "ceph_osd_devices": {
2025-04-10 00:43:19.332010 | orchestrator |         "sdb": {
2025-04-10 00:43:19.332841 | orchestrator |             "osd_lvm_uuid": "7af0ad6a-7281-507c-97d1-7760f3587d37"
2025-04-10 00:43:19.333529 | orchestrator |         },
2025-04-10 00:43:19.334717 | orchestrator |         "sdc": {
2025-04-10 00:43:19.335534 | orchestrator |             "osd_lvm_uuid": "52286b97-e205-54c6-a29d-cc3afdc4b583"
2025-04-10 00:43:19.336093 | orchestrator |         }
2025-04-10 00:43:19.336941 | orchestrator |     }
2025-04-10 00:43:19.337225 | orchestrator | }
2025-04-10 00:43:19.337938 | orchestrator |
2025-04-10 00:43:19.338756 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-04-10 00:43:19.339381 | orchestrator | Thursday 10 April 2025 00:43:19 +0000 (0:00:00.162) 0:00:15.464 ********
2025-04-10 00:43:19.488814 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:43:19.489490 | orchestrator |
2025-04-10 00:43:19.490479 | orchestrator | TASK [Print DB devices] ********************************************************
2025-04-10 00:43:19.490816 | orchestrator | Thursday 10 April 2025 00:43:19 +0000 (0:00:00.158) 0:00:15.622 ********
2025-04-10 00:43:19.637292 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:43:19.637851 | orchestrator |
2025-04-10 00:43:19.638849 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-04-10 00:43:19.639427 | orchestrator | Thursday 10 April 2025 00:43:19 +0000 (0:00:00.148) 0:00:15.771 ********
2025-04-10 00:43:19.774246 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:43:19.774630 | orchestrator |
2025-04-10 00:43:19.774670 | orchestrator | TASK [Print configuration data] ************************************************
2025-04-10 00:43:19.775014 | orchestrator | Thursday 10 April 2025 00:43:19 +0000 (0:00:00.136) 0:00:15.907 ********
2025-04-10 00:43:20.078471 | orchestrator | changed: [testbed-node-3] => {
2025-04-10 00:43:20.079087 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-04-10 00:43:20.079657 | orchestrator |         "ceph_osd_devices": {
2025-04-10 00:43:20.080505 | orchestrator |             "sdb": {
2025-04-10 00:43:20.080985 | orchestrator |                 "osd_lvm_uuid": "7af0ad6a-7281-507c-97d1-7760f3587d37"
2025-04-10 00:43:20.081713 | orchestrator |             },
2025-04-10 00:43:20.082287 | orchestrator |             "sdc": {
2025-04-10 00:43:20.085646 | orchestrator |                 "osd_lvm_uuid": "52286b97-e205-54c6-a29d-cc3afdc4b583"
2025-04-10 00:43:20.086140 | orchestrator |             }
2025-04-10 00:43:20.087292 | orchestrator |         },
2025-04-10 00:43:20.087512 | orchestrator |         "lvm_volumes": [
2025-04-10 00:43:20.088281 | orchestrator |             {
2025-04-10 00:43:20.088891 | orchestrator |                 "data": "osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37",
2025-04-10 00:43:20.089380 | orchestrator |                 "data_vg": "ceph-7af0ad6a-7281-507c-97d1-7760f3587d37"
2025-04-10 00:43:20.090183 | orchestrator |             },
2025-04-10 00:43:20.091364 | orchestrator |             {
2025-04-10 00:43:20.092259 | orchestrator |                 "data": "osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583",
2025-04-10 00:43:20.093280 | orchestrator |                 "data_vg": "ceph-52286b97-e205-54c6-a29d-cc3afdc4b583"
2025-04-10 00:43:20.095045 | orchestrator |             }
2025-04-10 00:43:20.096064 | orchestrator |         ]
2025-04-10 00:43:20.096646 | orchestrator |     }
2025-04-10 00:43:20.097564 | orchestrator | }
2025-04-10 00:43:20.098166 | orchestrator |
2025-04-10 00:43:20.098820 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-04-10 00:43:20.100616 | orchestrator | Thursday 10 April 2025 00:43:20 +0000 (0:00:00.305) 0:00:16.212 ********
2025-04-10 00:43:22.385610 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-04-10 00:43:22.386597 | orchestrator |
2025-04-10 00:43:22.389429 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2025-04-10 00:43:22.389860 | orchestrator | 2025-04-10 00:43:22.390337 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-10 00:43:22.391180 | orchestrator | Thursday 10 April 2025 00:43:22 +0000 (0:00:02.304) 0:00:18.517 ******** 2025-04-10 00:43:22.680133 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-10 00:43:22.680362 | orchestrator | 2025-04-10 00:43:22.680827 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-10 00:43:22.683498 | orchestrator | Thursday 10 April 2025 00:43:22 +0000 (0:00:00.296) 0:00:18.813 ******** 2025-04-10 00:43:22.946909 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:43:22.947891 | orchestrator | 2025-04-10 00:43:22.947939 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:22.948024 | orchestrator | Thursday 10 April 2025 00:43:22 +0000 (0:00:00.263) 0:00:19.077 ******** 2025-04-10 00:43:23.400561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-04-10 00:43:23.401290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-04-10 00:43:23.401865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-04-10 00:43:23.402672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-04-10 00:43:23.403070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-04-10 00:43:23.403840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-04-10 00:43:23.405249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-04-10 00:43:23.405329 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-04-10 00:43:23.406301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-04-10 00:43:23.407556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-04-10 00:43:23.408667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-04-10 00:43:23.409140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-04-10 00:43:23.411925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-04-10 00:43:23.612925 | orchestrator | 2025-04-10 00:43:23.613115 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:23.613137 | orchestrator | Thursday 10 April 2025 00:43:23 +0000 (0:00:00.455) 0:00:19.533 ******** 2025-04-10 00:43:23.613168 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:23.613267 | orchestrator | 2025-04-10 00:43:23.613291 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:23.616615 | orchestrator | Thursday 10 April 2025 00:43:23 +0000 (0:00:00.212) 0:00:19.746 ******** 2025-04-10 00:43:23.832493 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:23.833101 | orchestrator | 2025-04-10 00:43:23.833144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:23.833234 | orchestrator | Thursday 10 April 2025 00:43:23 +0000 (0:00:00.220) 0:00:19.966 ******** 2025-04-10 00:43:24.482408 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:24.482626 | orchestrator | 2025-04-10 00:43:24.483224 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:24.484234 | 
orchestrator | Thursday 10 April 2025 00:43:24 +0000 (0:00:00.648) 0:00:20.614 ******** 2025-04-10 00:43:24.689928 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:24.690300 | orchestrator | 2025-04-10 00:43:24.692732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:24.693921 | orchestrator | Thursday 10 April 2025 00:43:24 +0000 (0:00:00.206) 0:00:20.820 ******** 2025-04-10 00:43:24.906293 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:24.907029 | orchestrator | 2025-04-10 00:43:24.911046 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:25.136137 | orchestrator | Thursday 10 April 2025 00:43:24 +0000 (0:00:00.217) 0:00:21.038 ******** 2025-04-10 00:43:25.136274 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:25.136822 | orchestrator | 2025-04-10 00:43:25.138285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:25.140038 | orchestrator | Thursday 10 April 2025 00:43:25 +0000 (0:00:00.226) 0:00:21.265 ******** 2025-04-10 00:43:25.350136 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:25.350441 | orchestrator | 2025-04-10 00:43:25.350925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:25.351718 | orchestrator | Thursday 10 April 2025 00:43:25 +0000 (0:00:00.219) 0:00:21.484 ******** 2025-04-10 00:43:25.570880 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:25.572355 | orchestrator | 2025-04-10 00:43:25.572865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:25.572896 | orchestrator | Thursday 10 April 2025 00:43:25 +0000 (0:00:00.217) 0:00:21.701 ******** 2025-04-10 00:43:26.076505 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a) 2025-04-10 00:43:26.077699 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a) 2025-04-10 00:43:26.077849 | orchestrator | 2025-04-10 00:43:26.078556 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:26.078663 | orchestrator | Thursday 10 April 2025 00:43:26 +0000 (0:00:00.507) 0:00:22.209 ******** 2025-04-10 00:43:26.531114 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_864e33c6-b4c3-48eb-91b8-2629744c3ba6) 2025-04-10 00:43:26.531285 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_864e33c6-b4c3-48eb-91b8-2629744c3ba6) 2025-04-10 00:43:26.532023 | orchestrator | 2025-04-10 00:43:26.532061 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:26.532124 | orchestrator | Thursday 10 April 2025 00:43:26 +0000 (0:00:00.454) 0:00:22.664 ******** 2025-04-10 00:43:26.966608 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b0ed1186-9beb-4d4b-adab-3343747bf238) 2025-04-10 00:43:26.966994 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b0ed1186-9beb-4d4b-adab-3343747bf238) 2025-04-10 00:43:26.967396 | orchestrator | 2025-04-10 00:43:26.967764 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:26.968149 | orchestrator | Thursday 10 April 2025 00:43:26 +0000 (0:00:00.434) 0:00:23.098 ******** 2025-04-10 00:43:27.671868 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fa805255-2b65-45ba-aa52-d97cf6f3e06a) 2025-04-10 00:43:27.672143 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fa805255-2b65-45ba-aa52-d97cf6f3e06a) 2025-04-10 00:43:27.672705 | orchestrator | 2025-04-10 00:43:27.672918 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-04-10 00:43:27.673297 | orchestrator | Thursday 10 April 2025 00:43:27 +0000 (0:00:00.706) 0:00:23.804 ******** 2025-04-10 00:43:28.488466 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-10 00:43:28.490245 | orchestrator | 2025-04-10 00:43:28.491275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:28.491324 | orchestrator | Thursday 10 April 2025 00:43:28 +0000 (0:00:00.816) 0:00:24.621 ******** 2025-04-10 00:43:28.947330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-04-10 00:43:28.947458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-04-10 00:43:28.947473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-04-10 00:43:28.947946 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-04-10 00:43:28.950888 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-04-10 00:43:28.951281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-04-10 00:43:28.952753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-04-10 00:43:28.954488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-04-10 00:43:28.954830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-04-10 00:43:28.955316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-04-10 00:43:28.956295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2025-04-10 00:43:28.957378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-04-10 00:43:28.957859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-04-10 00:43:28.959001 | orchestrator | 2025-04-10 00:43:28.959802 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:28.960560 | orchestrator | Thursday 10 April 2025 00:43:28 +0000 (0:00:00.457) 0:00:25.078 ******** 2025-04-10 00:43:29.162585 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:29.163077 | orchestrator | 2025-04-10 00:43:29.164317 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:29.164883 | orchestrator | Thursday 10 April 2025 00:43:29 +0000 (0:00:00.218) 0:00:25.296 ******** 2025-04-10 00:43:29.393885 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:29.394595 | orchestrator | 2025-04-10 00:43:29.394877 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:29.395646 | orchestrator | Thursday 10 April 2025 00:43:29 +0000 (0:00:00.231) 0:00:25.527 ******** 2025-04-10 00:43:29.600205 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:29.601635 | orchestrator | 2025-04-10 00:43:29.602636 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:29.603454 | orchestrator | Thursday 10 April 2025 00:43:29 +0000 (0:00:00.204) 0:00:25.732 ******** 2025-04-10 00:43:29.845311 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:29.846177 | orchestrator | 2025-04-10 00:43:29.846226 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:29.847424 | orchestrator | Thursday 10 April 2025 00:43:29 +0000 (0:00:00.246) 0:00:25.978 ******** 2025-04-10 00:43:30.053466 
| orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:30.053917 | orchestrator | 2025-04-10 00:43:30.054447 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:30.054864 | orchestrator | Thursday 10 April 2025 00:43:30 +0000 (0:00:00.208) 0:00:26.186 ******** 2025-04-10 00:43:30.285383 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:30.285538 | orchestrator | 2025-04-10 00:43:30.285868 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:30.285901 | orchestrator | Thursday 10 April 2025 00:43:30 +0000 (0:00:00.232) 0:00:26.418 ******** 2025-04-10 00:43:30.504895 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:30.505110 | orchestrator | 2025-04-10 00:43:30.506077 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:30.506342 | orchestrator | Thursday 10 April 2025 00:43:30 +0000 (0:00:00.219) 0:00:26.638 ******** 2025-04-10 00:43:30.733621 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:30.733854 | orchestrator | 2025-04-10 00:43:30.734001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:30.734453 | orchestrator | Thursday 10 April 2025 00:43:30 +0000 (0:00:00.227) 0:00:26.865 ******** 2025-04-10 00:43:31.700994 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-04-10 00:43:31.702246 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-04-10 00:43:31.705916 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-04-10 00:43:31.707736 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-04-10 00:43:31.708877 | orchestrator | 2025-04-10 00:43:31.710156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:31.711154 | orchestrator | Thursday 10 April 2025 00:43:31 +0000 (0:00:00.967) 
0:00:27.832 ******** 2025-04-10 00:43:31.906591 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:31.907238 | orchestrator | 2025-04-10 00:43:31.908356 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:31.909320 | orchestrator | Thursday 10 April 2025 00:43:31 +0000 (0:00:00.205) 0:00:28.038 ******** 2025-04-10 00:43:32.110718 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:32.111577 | orchestrator | 2025-04-10 00:43:32.111612 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:32.111669 | orchestrator | Thursday 10 April 2025 00:43:32 +0000 (0:00:00.205) 0:00:28.243 ******** 2025-04-10 00:43:32.323885 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:32.324307 | orchestrator | 2025-04-10 00:43:32.325086 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:32.325585 | orchestrator | Thursday 10 April 2025 00:43:32 +0000 (0:00:00.213) 0:00:28.457 ******** 2025-04-10 00:43:32.525781 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:32.526072 | orchestrator | 2025-04-10 00:43:32.526168 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-10 00:43:32.526741 | orchestrator | Thursday 10 April 2025 00:43:32 +0000 (0:00:00.200) 0:00:28.658 ******** 2025-04-10 00:43:32.731413 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-04-10 00:43:32.734403 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-04-10 00:43:32.881758 | orchestrator | 2025-04-10 00:43:32.881865 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-10 00:43:32.881883 | orchestrator | Thursday 10 April 2025 00:43:32 +0000 (0:00:00.205) 0:00:28.863 ******** 2025-04-10 00:43:32.881913 | orchestrator | skipping: 
[testbed-node-4] 2025-04-10 00:43:32.882089 | orchestrator | 2025-04-10 00:43:32.882598 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-10 00:43:32.883373 | orchestrator | Thursday 10 April 2025 00:43:32 +0000 (0:00:00.152) 0:00:29.015 ******** 2025-04-10 00:43:33.044228 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:33.044398 | orchestrator | 2025-04-10 00:43:33.045177 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-10 00:43:33.045764 | orchestrator | Thursday 10 April 2025 00:43:33 +0000 (0:00:00.162) 0:00:29.177 ******** 2025-04-10 00:43:33.193470 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:33.193630 | orchestrator | 2025-04-10 00:43:33.193652 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-10 00:43:33.193672 | orchestrator | Thursday 10 April 2025 00:43:33 +0000 (0:00:00.148) 0:00:29.326 ******** 2025-04-10 00:43:33.339046 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:43:33.340975 | orchestrator | 2025-04-10 00:43:33.342157 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-10 00:43:33.342190 | orchestrator | Thursday 10 April 2025 00:43:33 +0000 (0:00:00.142) 0:00:29.468 ******** 2025-04-10 00:43:33.523893 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e6570ad4-669c-53e9-93b8-24292f6b58fb'}}) 2025-04-10 00:43:33.524325 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '543b72d2-41b4-5023-b438-6662cb79109c'}}) 2025-04-10 00:43:33.525200 | orchestrator | 2025-04-10 00:43:33.525757 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-10 00:43:33.526777 | orchestrator | Thursday 10 April 2025 00:43:33 +0000 (0:00:00.188) 0:00:29.657 ******** 2025-04-10 00:43:33.901414 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e6570ad4-669c-53e9-93b8-24292f6b58fb'}})  2025-04-10 00:43:33.901889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '543b72d2-41b4-5023-b438-6662cb79109c'}})  2025-04-10 00:43:33.904908 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:33.905371 | orchestrator | 2025-04-10 00:43:33.906375 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-10 00:43:33.906677 | orchestrator | Thursday 10 April 2025 00:43:33 +0000 (0:00:00.376) 0:00:30.033 ******** 2025-04-10 00:43:34.100284 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e6570ad4-669c-53e9-93b8-24292f6b58fb'}})  2025-04-10 00:43:34.100940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '543b72d2-41b4-5023-b438-6662cb79109c'}})  2025-04-10 00:43:34.103070 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:34.104423 | orchestrator | 2025-04-10 00:43:34.105375 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-10 00:43:34.106528 | orchestrator | Thursday 10 April 2025 00:43:34 +0000 (0:00:00.199) 0:00:30.232 ******** 2025-04-10 00:43:34.277247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e6570ad4-669c-53e9-93b8-24292f6b58fb'}})  2025-04-10 00:43:34.277582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '543b72d2-41b4-5023-b438-6662cb79109c'}})  2025-04-10 00:43:34.278152 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:34.278996 | orchestrator | 2025-04-10 00:43:34.279138 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-10 00:43:34.280234 | orchestrator | Thursday 10 April 2025 00:43:34 +0000 
(0:00:00.178) 0:00:30.411 ******** 2025-04-10 00:43:34.435782 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:43:34.436323 | orchestrator | 2025-04-10 00:43:34.437139 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-10 00:43:34.437799 | orchestrator | Thursday 10 April 2025 00:43:34 +0000 (0:00:00.157) 0:00:30.568 ******** 2025-04-10 00:43:34.587582 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:43:34.587985 | orchestrator | 2025-04-10 00:43:34.588047 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-10 00:43:34.588534 | orchestrator | Thursday 10 April 2025 00:43:34 +0000 (0:00:00.151) 0:00:30.719 ******** 2025-04-10 00:43:34.731492 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:34.732281 | orchestrator | 2025-04-10 00:43:34.732777 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-10 00:43:34.733280 | orchestrator | Thursday 10 April 2025 00:43:34 +0000 (0:00:00.144) 0:00:30.864 ******** 2025-04-10 00:43:34.867596 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:34.867783 | orchestrator | 2025-04-10 00:43:34.868093 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-10 00:43:34.868476 | orchestrator | Thursday 10 April 2025 00:43:34 +0000 (0:00:00.136) 0:00:31.000 ******** 2025-04-10 00:43:35.019537 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:35.019748 | orchestrator | 2025-04-10 00:43:35.020844 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-10 00:43:35.021519 | orchestrator | Thursday 10 April 2025 00:43:35 +0000 (0:00:00.151) 0:00:31.152 ******** 2025-04-10 00:43:35.172841 | orchestrator | ok: [testbed-node-4] => { 2025-04-10 00:43:35.173703 | orchestrator |  "ceph_osd_devices": { 2025-04-10 00:43:35.174979 | orchestrator |  "sdb": { 
2025-04-10 00:43:35.177745 | orchestrator |  "osd_lvm_uuid": "e6570ad4-669c-53e9-93b8-24292f6b58fb" 2025-04-10 00:43:35.178114 | orchestrator |  }, 2025-04-10 00:43:35.178146 | orchestrator |  "sdc": { 2025-04-10 00:43:35.179275 | orchestrator |  "osd_lvm_uuid": "543b72d2-41b4-5023-b438-6662cb79109c" 2025-04-10 00:43:35.179639 | orchestrator |  } 2025-04-10 00:43:35.180088 | orchestrator |  } 2025-04-10 00:43:35.181266 | orchestrator | } 2025-04-10 00:43:35.181703 | orchestrator | 2025-04-10 00:43:35.182475 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-10 00:43:35.183093 | orchestrator | Thursday 10 April 2025 00:43:35 +0000 (0:00:00.152) 0:00:31.305 ******** 2025-04-10 00:43:35.308794 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:35.309246 | orchestrator | 2025-04-10 00:43:35.309824 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-10 00:43:35.310342 | orchestrator | Thursday 10 April 2025 00:43:35 +0000 (0:00:00.137) 0:00:31.442 ******** 2025-04-10 00:43:35.446550 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:35.447341 | orchestrator | 2025-04-10 00:43:35.448061 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-10 00:43:35.449540 | orchestrator | Thursday 10 April 2025 00:43:35 +0000 (0:00:00.137) 0:00:31.580 ******** 2025-04-10 00:43:35.574720 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:43:35.576079 | orchestrator | 2025-04-10 00:43:35.576647 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-10 00:43:35.576678 | orchestrator | Thursday 10 April 2025 00:43:35 +0000 (0:00:00.127) 0:00:31.707 ******** 2025-04-10 00:43:36.075101 | orchestrator | changed: [testbed-node-4] => { 2025-04-10 00:43:36.075502 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-10 00:43:36.075524 | orchestrator | 
 "ceph_osd_devices": { 2025-04-10 00:43:36.076063 | orchestrator |  "sdb": { 2025-04-10 00:43:36.077607 | orchestrator |  "osd_lvm_uuid": "e6570ad4-669c-53e9-93b8-24292f6b58fb" 2025-04-10 00:43:36.078428 | orchestrator |  }, 2025-04-10 00:43:36.079151 | orchestrator |  "sdc": { 2025-04-10 00:43:36.079533 | orchestrator |  "osd_lvm_uuid": "543b72d2-41b4-5023-b438-6662cb79109c" 2025-04-10 00:43:36.079879 | orchestrator |  } 2025-04-10 00:43:36.080621 | orchestrator |  }, 2025-04-10 00:43:36.080746 | orchestrator |  "lvm_volumes": [ 2025-04-10 00:43:36.081247 | orchestrator |  { 2025-04-10 00:43:36.081892 | orchestrator |  "data": "osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb", 2025-04-10 00:43:36.082279 | orchestrator |  "data_vg": "ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb" 2025-04-10 00:43:36.082295 | orchestrator |  }, 2025-04-10 00:43:36.082611 | orchestrator |  { 2025-04-10 00:43:36.083188 | orchestrator |  "data": "osd-block-543b72d2-41b4-5023-b438-6662cb79109c", 2025-04-10 00:43:36.083307 | orchestrator |  "data_vg": "ceph-543b72d2-41b4-5023-b438-6662cb79109c" 2025-04-10 00:43:36.083320 | orchestrator |  } 2025-04-10 00:43:36.083737 | orchestrator |  ] 2025-04-10 00:43:36.083964 | orchestrator |  } 2025-04-10 00:43:36.084205 | orchestrator | } 2025-04-10 00:43:36.084444 | orchestrator | 2025-04-10 00:43:36.084759 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-10 00:43:36.085527 | orchestrator | Thursday 10 April 2025 00:43:36 +0000 (0:00:00.497) 0:00:32.205 ******** 2025-04-10 00:43:37.509993 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-10 00:43:37.511639 | orchestrator | 2025-04-10 00:43:37.512711 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-10 00:43:37.514287 | orchestrator | 2025-04-10 00:43:37.515330 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2025-04-10 00:43:37.516067 | orchestrator | Thursday 10 April 2025 00:43:37 +0000 (0:00:01.432) 0:00:33.638 ******** 2025-04-10 00:43:38.118760 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-10 00:43:38.118936 | orchestrator | 2025-04-10 00:43:38.120106 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-10 00:43:38.120902 | orchestrator | Thursday 10 April 2025 00:43:38 +0000 (0:00:00.611) 0:00:34.249 ******** 2025-04-10 00:43:38.352656 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:43:38.353842 | orchestrator | 2025-04-10 00:43:38.353868 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:38.354159 | orchestrator | Thursday 10 April 2025 00:43:38 +0000 (0:00:00.236) 0:00:34.486 ******** 2025-04-10 00:43:38.763893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-04-10 00:43:38.764112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-04-10 00:43:38.764181 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-04-10 00:43:38.765448 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-04-10 00:43:38.766104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-04-10 00:43:38.766342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-04-10 00:43:38.768313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-04-10 00:43:38.768804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-04-10 00:43:38.768827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-04-10 
00:43:38.768841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-04-10 00:43:38.768859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-04-10 00:43:38.769637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-04-10 00:43:38.769818 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-04-10 00:43:38.769843 | orchestrator | 2025-04-10 00:43:38.770102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:38.770342 | orchestrator | Thursday 10 April 2025 00:43:38 +0000 (0:00:00.410) 0:00:34.896 ******** 2025-04-10 00:43:38.978351 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:38.980513 | orchestrator | 2025-04-10 00:43:38.981420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:38.982186 | orchestrator | Thursday 10 April 2025 00:43:38 +0000 (0:00:00.213) 0:00:35.110 ******** 2025-04-10 00:43:39.167003 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:39.167481 | orchestrator | 2025-04-10 00:43:39.167530 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:39.168096 | orchestrator | Thursday 10 April 2025 00:43:39 +0000 (0:00:00.188) 0:00:35.299 ******** 2025-04-10 00:43:39.410534 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:39.412054 | orchestrator | 2025-04-10 00:43:39.413987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:39.605272 | orchestrator | Thursday 10 April 2025 00:43:39 +0000 (0:00:00.243) 0:00:35.542 ******** 2025-04-10 00:43:39.605404 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:39.606420 | orchestrator | 2025-04-10 00:43:39.607258 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:39.607284 | orchestrator | Thursday 10 April 2025 00:43:39 +0000 (0:00:00.196) 0:00:35.739 ******** 2025-04-10 00:43:39.831539 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:39.832472 | orchestrator | 2025-04-10 00:43:39.833777 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:39.834848 | orchestrator | Thursday 10 April 2025 00:43:39 +0000 (0:00:00.224) 0:00:35.963 ******** 2025-04-10 00:43:40.041603 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:40.043107 | orchestrator | 2025-04-10 00:43:40.044217 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:40.044627 | orchestrator | Thursday 10 April 2025 00:43:40 +0000 (0:00:00.210) 0:00:36.174 ******** 2025-04-10 00:43:40.247392 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:40.247750 | orchestrator | 2025-04-10 00:43:40.248201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:40.248573 | orchestrator | Thursday 10 April 2025 00:43:40 +0000 (0:00:00.206) 0:00:36.380 ******** 2025-04-10 00:43:40.874356 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:40.874532 | orchestrator | 2025-04-10 00:43:40.875098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:40.875553 | orchestrator | Thursday 10 April 2025 00:43:40 +0000 (0:00:00.625) 0:00:37.006 ******** 2025-04-10 00:43:41.312759 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817) 2025-04-10 00:43:41.314064 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817) 2025-04-10 00:43:41.314549 | orchestrator | 2025-04-10 00:43:41.314576 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2025-04-10 00:43:41.314599 | orchestrator | Thursday 10 April 2025 00:43:41 +0000 (0:00:00.440) 0:00:37.446 ******** 2025-04-10 00:43:41.847516 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7b59c1d3-d88b-4e69-8f5d-bfd6640ee0c1) 2025-04-10 00:43:41.849584 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7b59c1d3-d88b-4e69-8f5d-bfd6640ee0c1) 2025-04-10 00:43:41.850507 | orchestrator | 2025-04-10 00:43:41.851199 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:41.851911 | orchestrator | Thursday 10 April 2025 00:43:41 +0000 (0:00:00.530) 0:00:37.977 ******** 2025-04-10 00:43:42.276232 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8309ccf2-021f-4ba0-8871-1baa1ae2c644) 2025-04-10 00:43:42.276426 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8309ccf2-021f-4ba0-8871-1baa1ae2c644) 2025-04-10 00:43:42.276902 | orchestrator | 2025-04-10 00:43:42.276938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:42.277198 | orchestrator | Thursday 10 April 2025 00:43:42 +0000 (0:00:00.432) 0:00:38.410 ******** 2025-04-10 00:43:42.722181 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_221f8640-be1f-4702-ab57-197a8a373172) 2025-04-10 00:43:42.723279 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_221f8640-be1f-4702-ab57-197a8a373172) 2025-04-10 00:43:42.726643 | orchestrator | 2025-04-10 00:43:42.727667 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:43:42.728553 | orchestrator | Thursday 10 April 2025 00:43:42 +0000 (0:00:00.444) 0:00:38.854 ******** 2025-04-10 00:43:43.113855 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-10 00:43:43.114100 | 
orchestrator | 2025-04-10 00:43:43.114804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:43.115298 | orchestrator | Thursday 10 April 2025 00:43:43 +0000 (0:00:00.390) 0:00:39.245 ******** 2025-04-10 00:43:43.535658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-04-10 00:43:43.535813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-04-10 00:43:43.535837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-04-10 00:43:43.536857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-04-10 00:43:43.537554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-04-10 00:43:43.538192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-04-10 00:43:43.539183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-04-10 00:43:43.540286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-04-10 00:43:43.540711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-04-10 00:43:43.541189 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-04-10 00:43:43.542584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-04-10 00:43:43.543166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-04-10 00:43:43.543783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-04-10 00:43:43.544090 | orchestrator | 
2025-04-10 00:43:43.544997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:43.545869 | orchestrator | Thursday 10 April 2025 00:43:43 +0000 (0:00:00.423) 0:00:39.668 ******** 2025-04-10 00:43:43.752535 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:43.753258 | orchestrator | 2025-04-10 00:43:43.754417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:43.943358 | orchestrator | Thursday 10 April 2025 00:43:43 +0000 (0:00:00.217) 0:00:39.886 ******** 2025-04-10 00:43:43.943466 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:43.943549 | orchestrator | 2025-04-10 00:43:43.944125 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:43.944693 | orchestrator | Thursday 10 April 2025 00:43:43 +0000 (0:00:00.190) 0:00:40.076 ******** 2025-04-10 00:43:44.573188 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:44.573435 | orchestrator | 2025-04-10 00:43:44.573470 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:44.574073 | orchestrator | Thursday 10 April 2025 00:43:44 +0000 (0:00:00.630) 0:00:40.706 ******** 2025-04-10 00:43:44.798393 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:44.798761 | orchestrator | 2025-04-10 00:43:44.798789 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:44.798811 | orchestrator | Thursday 10 April 2025 00:43:44 +0000 (0:00:00.223) 0:00:40.930 ******** 2025-04-10 00:43:45.027902 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:45.028894 | orchestrator | 2025-04-10 00:43:45.028934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:45.230817 | orchestrator | Thursday 10 April 2025 00:43:45 +0000 
(0:00:00.231) 0:00:41.162 ******** 2025-04-10 00:43:45.230990 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:45.231184 | orchestrator | 2025-04-10 00:43:45.231598 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:45.231625 | orchestrator | Thursday 10 April 2025 00:43:45 +0000 (0:00:00.201) 0:00:41.363 ******** 2025-04-10 00:43:45.439055 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:45.439237 | orchestrator | 2025-04-10 00:43:45.440804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:45.441863 | orchestrator | Thursday 10 April 2025 00:43:45 +0000 (0:00:00.208) 0:00:41.572 ******** 2025-04-10 00:43:45.650325 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:45.650781 | orchestrator | 2025-04-10 00:43:45.650817 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:45.651204 | orchestrator | Thursday 10 April 2025 00:43:45 +0000 (0:00:00.211) 0:00:41.783 ******** 2025-04-10 00:43:46.371072 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-04-10 00:43:46.372506 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-04-10 00:43:46.373758 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-04-10 00:43:46.374133 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-04-10 00:43:46.374902 | orchestrator | 2025-04-10 00:43:46.375414 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:46.376250 | orchestrator | Thursday 10 April 2025 00:43:46 +0000 (0:00:00.717) 0:00:42.500 ******** 2025-04-10 00:43:46.598279 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:46.598441 | orchestrator | 2025-04-10 00:43:46.598678 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:46.599404 | orchestrator | 
Thursday 10 April 2025 00:43:46 +0000 (0:00:00.230) 0:00:42.731 ******** 2025-04-10 00:43:46.804452 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:46.804601 | orchestrator | 2025-04-10 00:43:46.805274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:46.806078 | orchestrator | Thursday 10 April 2025 00:43:46 +0000 (0:00:00.205) 0:00:42.937 ******** 2025-04-10 00:43:47.020822 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:47.021001 | orchestrator | 2025-04-10 00:43:47.021025 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:43:47.021046 | orchestrator | Thursday 10 April 2025 00:43:47 +0000 (0:00:00.212) 0:00:43.150 ******** 2025-04-10 00:43:47.249196 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:47.249354 | orchestrator | 2025-04-10 00:43:47.249381 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-10 00:43:47.251115 | orchestrator | Thursday 10 April 2025 00:43:47 +0000 (0:00:00.232) 0:00:43.382 ******** 2025-04-10 00:43:47.668221 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-04-10 00:43:47.668392 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-04-10 00:43:47.668420 | orchestrator | 2025-04-10 00:43:47.669175 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-10 00:43:47.669538 | orchestrator | Thursday 10 April 2025 00:43:47 +0000 (0:00:00.420) 0:00:43.802 ******** 2025-04-10 00:43:47.811131 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:43:47.811835 | orchestrator | 2025-04-10 00:43:47.813050 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-10 00:43:47.815625 | orchestrator | Thursday 10 April 2025 00:43:47 +0000 (0:00:00.141) 0:00:43.944 ******** 
2025-04-10 00:43:47.999899 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:43:48.003882 | orchestrator |
2025-04-10 00:43:48.004125 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-04-10 00:43:48.004159 | orchestrator | Thursday 10 April 2025 00:43:47 +0000 (0:00:00.185) 0:00:44.130 ********
2025-04-10 00:43:48.174318 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:43:48.174788 | orchestrator |
2025-04-10 00:43:48.175743 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-04-10 00:43:48.176250 | orchestrator | Thursday 10 April 2025 00:43:48 +0000 (0:00:00.177) 0:00:44.307 ********
2025-04-10 00:43:48.331295 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:43:48.332115 | orchestrator |
2025-04-10 00:43:48.333578 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-04-10 00:43:48.334712 | orchestrator | Thursday 10 April 2025 00:43:48 +0000 (0:00:00.156) 0:00:44.464 ********
2025-04-10 00:43:48.523248 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47ce51ce-522f-5092-939d-97f529b04c78'}})
2025-04-10 00:43:48.523443 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1024c186-728b-5ddc-b380-e3967fe3a792'}})
2025-04-10 00:43:48.524346 | orchestrator |
2025-04-10 00:43:48.524945 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-04-10 00:43:48.525653 | orchestrator | Thursday 10 April 2025 00:43:48 +0000 (0:00:00.190) 0:00:44.655 ********
2025-04-10 00:43:48.680651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47ce51ce-522f-5092-939d-97f529b04c78'}})
2025-04-10 00:43:48.683494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1024c186-728b-5ddc-b380-e3967fe3a792'}})
2025-04-10 00:43:48.684895 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:43:48.685345 | orchestrator |
2025-04-10 00:43:48.686172 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-04-10 00:43:48.686275 | orchestrator | Thursday 10 April 2025 00:43:48 +0000 (0:00:00.158) 0:00:44.813 ********
2025-04-10 00:43:48.868076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47ce51ce-522f-5092-939d-97f529b04c78'}})
2025-04-10 00:43:48.868674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1024c186-728b-5ddc-b380-e3967fe3a792'}})
2025-04-10 00:43:48.869377 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:43:48.870753 | orchestrator |
2025-04-10 00:43:48.871158 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-04-10 00:43:48.872592 | orchestrator | Thursday 10 April 2025 00:43:48 +0000 (0:00:00.187) 0:00:45.001 ********
2025-04-10 00:43:49.039396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47ce51ce-522f-5092-939d-97f529b04c78'}})
2025-04-10 00:43:49.039613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1024c186-728b-5ddc-b380-e3967fe3a792'}})
2025-04-10 00:43:49.039828 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:43:49.041158 | orchestrator |
2025-04-10 00:43:49.044177 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-04-10 00:43:49.191183 | orchestrator | Thursday 10 April 2025 00:43:49 +0000 (0:00:00.171) 0:00:45.172 ********
2025-04-10 00:43:49.191317 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:43:49.192553 | orchestrator |
2025-04-10 00:43:49.193920 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-04-10 00:43:49.196238 | orchestrator | Thursday 10 April 2025 00:43:49 +0000 (0:00:00.152) 0:00:45.324 ********
2025-04-10 00:43:49.337260 | orchestrator | ok: [testbed-node-5]
2025-04-10 00:43:49.337703 | orchestrator |
2025-04-10 00:43:49.337751 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-04-10 00:43:49.338281 | orchestrator | Thursday 10 April 2025 00:43:49 +0000 (0:00:00.145) 0:00:45.470 ********
2025-04-10 00:43:49.691374 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:43:49.692393 | orchestrator |
2025-04-10 00:43:49.694942 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-04-10 00:43:49.819263 | orchestrator | Thursday 10 April 2025 00:43:49 +0000 (0:00:00.352) 0:00:45.823 ********
2025-04-10 00:43:49.819354 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:43:49.819638 | orchestrator |
2025-04-10 00:43:49.820576 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-04-10 00:43:49.821364 | orchestrator | Thursday 10 April 2025 00:43:49 +0000 (0:00:00.129) 0:00:45.952 ********
2025-04-10 00:43:49.986877 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:43:49.987163 | orchestrator |
2025-04-10 00:43:49.988140 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-04-10 00:43:49.988933 | orchestrator | Thursday 10 April 2025 00:43:49 +0000 (0:00:00.165) 0:00:46.118 ********
2025-04-10 00:43:50.127506 | orchestrator | ok: [testbed-node-5] => {
2025-04-10 00:43:50.127677 | orchestrator |     "ceph_osd_devices": {
2025-04-10 00:43:50.128133 | orchestrator |         "sdb": {
2025-04-10 00:43:50.128877 | orchestrator |             "osd_lvm_uuid": "47ce51ce-522f-5092-939d-97f529b04c78"
2025-04-10 00:43:50.129565 | orchestrator |         },
2025-04-10 00:43:50.130439 | orchestrator |         "sdc": {
2025-04-10 00:43:50.131440 | orchestrator |             "osd_lvm_uuid": "1024c186-728b-5ddc-b380-e3967fe3a792"
2025-04-10 00:43:50.132058 | orchestrator |         }
2025-04-10 00:43:50.133051 | orchestrator |     }
2025-04-10 00:43:50.133836 | orchestrator | }
2025-04-10 00:43:50.134445 | orchestrator |
2025-04-10 00:43:50.135282 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-04-10 00:43:50.135844 | orchestrator | Thursday 10 April 2025 00:43:50 +0000 (0:00:00.142) 0:00:46.261 ********
2025-04-10 00:43:50.270488 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:43:50.270669 | orchestrator |
2025-04-10 00:43:50.272713 | orchestrator | TASK [Print DB devices] ********************************************************
2025-04-10 00:43:50.273828 | orchestrator | Thursday 10 April 2025 00:43:50 +0000 (0:00:00.142) 0:00:46.403 ********
2025-04-10 00:43:50.422207 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:43:50.423228 | orchestrator |
2025-04-10 00:43:50.423971 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-04-10 00:43:50.426073 | orchestrator | Thursday 10 April 2025 00:43:50 +0000 (0:00:00.149) 0:00:46.552 ********
2025-04-10 00:43:50.551541 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:43:50.551751 | orchestrator |
2025-04-10 00:43:50.552808 | orchestrator | TASK [Print configuration data] ************************************************
2025-04-10 00:43:50.553541 | orchestrator | Thursday 10 April 2025 00:43:50 +0000 (0:00:00.132) 0:00:46.685 ********
2025-04-10 00:43:50.866348 | orchestrator | changed: [testbed-node-5] => {
2025-04-10 00:43:50.866521 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-04-10 00:43:50.867714 | orchestrator |         "ceph_osd_devices": {
2025-04-10 00:43:50.869254 | orchestrator |             "sdb": {
2025-04-10 00:43:50.870143 | orchestrator |                 "osd_lvm_uuid": "47ce51ce-522f-5092-939d-97f529b04c78"
2025-04-10 00:43:50.871068 | orchestrator |             },
2025-04-10 00:43:50.871739 | orchestrator |             "sdc": {
2025-04-10 00:43:50.872706 | orchestrator |                 "osd_lvm_uuid": "1024c186-728b-5ddc-b380-e3967fe3a792"
2025-04-10 00:43:50.873041 | orchestrator |             }
2025-04-10 00:43:50.874459 | orchestrator |         },
2025-04-10 00:43:50.874914 | orchestrator |         "lvm_volumes": [
2025-04-10 00:43:50.876142 | orchestrator |             {
2025-04-10 00:43:50.876912 | orchestrator |                 "data": "osd-block-47ce51ce-522f-5092-939d-97f529b04c78",
2025-04-10 00:43:50.878084 | orchestrator |                 "data_vg": "ceph-47ce51ce-522f-5092-939d-97f529b04c78"
2025-04-10 00:43:50.878769 | orchestrator |             },
2025-04-10 00:43:50.879879 | orchestrator |             {
2025-04-10 00:43:50.880190 | orchestrator |                 "data": "osd-block-1024c186-728b-5ddc-b380-e3967fe3a792",
2025-04-10 00:43:50.881085 | orchestrator |                 "data_vg": "ceph-1024c186-728b-5ddc-b380-e3967fe3a792"
2025-04-10 00:43:50.881267 | orchestrator |             }
2025-04-10 00:43:50.882186 | orchestrator |         ]
2025-04-10 00:43:50.882848 | orchestrator |     }
2025-04-10 00:43:50.883255 | orchestrator | }
2025-04-10 00:43:50.883331 | orchestrator |
2025-04-10 00:43:50.884084 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-04-10 00:43:50.884473 | orchestrator | Thursday 10 April 2025 00:43:50 +0000 (0:00:00.314) 0:00:46.999 ********
2025-04-10 00:43:52.205183 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-04-10 00:43:52.205358 | orchestrator |
2025-04-10 00:43:52.207154 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:43:52.207178 | orchestrator | 2025-04-10 00:43:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-10 00:43:52.208493 | orchestrator | 2025-04-10 00:43:52 | INFO  | Please wait and do not abort execution.
2025-04-10 00:43:52.208527 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-10 00:43:52.211051 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-10 00:43:52.211703 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-10 00:43:52.212699 | orchestrator |
2025-04-10 00:43:52.213418 | orchestrator |
2025-04-10 00:43:52.214732 | orchestrator |
2025-04-10 00:43:52.214914 | orchestrator | TASKS RECAP ********************************************************************
2025-04-10 00:43:52.216401 | orchestrator | Thursday 10 April 2025 00:43:52 +0000 (0:00:01.336) 0:00:48.336 ********
2025-04-10 00:43:52.217507 | orchestrator | ===============================================================================
2025-04-10 00:43:52.218744 | orchestrator | Write configuration file ------------------------------------------------ 5.07s
2025-04-10 00:43:52.219166 | orchestrator | Add known links to the list of available block devices ------------------ 1.45s
2025-04-10 00:43:52.220175 | orchestrator | Add known partitions to the list of available block devices ------------- 1.34s
2025-04-10 00:43:52.221265 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.18s
2025-04-10 00:43:52.221899 | orchestrator | Print configuration data ------------------------------------------------ 1.12s
2025-04-10 00:43:52.222703 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s
2025-04-10 00:43:52.223302 | orchestrator | Add known links to the list of available block devices ------------------ 0.96s
2025-04-10 00:43:52.223588 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.94s
2025-04-10 00:43:52.223988 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s
2025-04-10 00:43:52.224299 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2025-04-10 00:43:52.224866 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s
2025-04-10 00:43:52.225172 | orchestrator | Set WAL devices config data --------------------------------------------- 0.78s
2025-04-10 00:43:52.225768 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2025-04-10 00:43:52.226978 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.75s
2025-04-10 00:43:52.227395 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s
2025-04-10 00:43:52.227423 | orchestrator | Set DB devices config data ---------------------------------------------- 0.73s
2025-04-10 00:43:52.227987 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2025-04-10 00:43:52.228384 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2025-04-10 00:43:52.228877 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2025-04-10 00:43:52.229386 | orchestrator | Generate WAL VG names --------------------------------------------------- 0.66s
2025-04-10 00:44:04.425433 | orchestrator | 2025-04-10 00:44:04 | INFO  | Task 7db894bf-4c98-450b-9155-d903886c5d58 is running in background. Output coming soon.
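As a reading aid for the "Print configuration data" output above: each `lvm_volumes` entry embeds the per-disk `osd_lvm_uuid` from `ceph_osd_devices` into an `osd-block-<uuid>` LV name and a `ceph-<uuid>` VG name. The sketch below reproduces that block-only mapping from the values shown in the log; it is a minimal illustration, not the playbook's actual Jinja2 logic, and the helper name `lvm_volumes_block_only` is hypothetical.

```python
def lvm_volumes_block_only(ceph_osd_devices):
    """Derive a block-only lvm_volumes list from ceph_osd_devices.

    Illustrative only: mirrors the naming pattern visible in the log
    (data = 'osd-block-<uuid>', data_vg = 'ceph-<uuid>').
    """
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]


# Values taken from the testbed-node-5 output above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "47ce51ce-522f-5092-939d-97f529b04c78"},
    "sdc": {"osd_lvm_uuid": "1024c186-728b-5ddc-b380-e3967fe3a792"},
}
print(lvm_volumes_block_only(ceph_osd_devices))
```

Because the UUID appears in both names, the later "Create block VGs" / "Create block LVs" tasks can address each OSD's volume group and logical volume without any extra lookup.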
2025-04-10 00:44:31.057429 | orchestrator | 2025-04-10 00:44:21 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-04-10 00:44:32.822152 | orchestrator | 2025-04-10 00:44:21 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-04-10 00:44:32.822280 | orchestrator | 2025-04-10 00:44:21 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-04-10 00:44:32.822305 | orchestrator | 2025-04-10 00:44:22 | INFO  | Handling group overwrites in 99-overwrite
2025-04-10 00:44:32.822339 | orchestrator | 2025-04-10 00:44:22 | INFO  | Removing group ceph-mds from 50-ceph
2025-04-10 00:44:32.822369 | orchestrator | 2025-04-10 00:44:22 | INFO  | Removing group ceph-rgw from 50-ceph
2025-04-10 00:44:32.822381 | orchestrator | 2025-04-10 00:44:22 | INFO  | Removing group netbird:children from 50-infrastruture
2025-04-10 00:44:32.822392 | orchestrator | 2025-04-10 00:44:22 | INFO  | Removing group storage:children from 50-kolla
2025-04-10 00:44:32.822403 | orchestrator | 2025-04-10 00:44:22 | INFO  | Removing group frr:children from 60-generic
2025-04-10 00:44:32.822414 | orchestrator | 2025-04-10 00:44:22 | INFO  | Handling group overwrites in 20-roles
2025-04-10 00:44:32.822424 | orchestrator | 2025-04-10 00:44:22 | INFO  | Removing group k3s_node from 50-infrastruture
2025-04-10 00:44:32.822434 | orchestrator | 2025-04-10 00:44:22 | INFO  | File 20-netbox not found in /inventory.pre/
2025-04-10 00:44:32.822445 | orchestrator | 2025-04-10 00:44:30 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-04-10 00:44:32.822470 | orchestrator | 2025-04-10 00:44:32 | INFO  | Task 3b6f7de6-f3ec-4475-97ae-8725d41b077d (ceph-create-lvm-devices) was prepared for execution.
2025-04-10 00:44:36.017763 | orchestrator | 2025-04-10 00:44:32 | INFO  | It takes a moment until task 3b6f7de6-f3ec-4475-97ae-8725d41b077d (ceph-create-lvm-devices) has been started and output is visible here.
2025-04-10 00:44:36.017899 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-04-10 00:44:36.594675 | orchestrator |
2025-04-10 00:44:36.595292 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-04-10 00:44:36.595842 | orchestrator |
2025-04-10 00:44:36.599888 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-10 00:44:36.828625 | orchestrator | Thursday 10 April 2025 00:44:36 +0000 (0:00:00.499) 0:00:00.499 ********
2025-04-10 00:44:36.828799 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-04-10 00:44:36.828930 | orchestrator |
2025-04-10 00:44:36.830339 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-10 00:44:36.831081 | orchestrator | Thursday 10 April 2025 00:44:36 +0000 (0:00:00.235) 0:00:00.734 ********
2025-04-10 00:44:37.056135 | orchestrator | ok: [testbed-node-3]
2025-04-10 00:44:37.057407 | orchestrator |
2025-04-10 00:44:37.061335 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:37.062050 | orchestrator | Thursday 10 April 2025 00:44:37 +0000 (0:00:00.227) 0:00:00.962 ********
2025-04-10 00:44:37.836132 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-04-10 00:44:37.836267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-04-10 00:44:37.836287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-04-10 00:44:37.836977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-04-10 00:44:37.837847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-04-10 00:44:37.838827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-04-10 00:44:37.839272 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-04-10 00:44:37.839522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-04-10 00:44:37.839879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-04-10 00:44:37.840124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-04-10 00:44:37.844040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-04-10 00:44:37.844139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-04-10 00:44:37.844178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-04-10 00:44:37.844398 | orchestrator |
2025-04-10 00:44:37.844624 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:37.844824 | orchestrator | Thursday 10 April 2025 00:44:37 +0000 (0:00:00.780) 0:00:01.742 ********
2025-04-10 00:44:38.039145 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:38.039911 | orchestrator |
2025-04-10 00:44:38.041143 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:38.041590 | orchestrator | Thursday 10 April 2025 00:44:38 +0000 (0:00:00.203) 0:00:01.946 ********
2025-04-10 00:44:38.248825 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:38.250413 | orchestrator |
2025-04-10 00:44:38.490101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:38.490210 | orchestrator | Thursday 10 April 2025 00:44:38 +0000 (0:00:00.206) 0:00:02.152 ********
2025-04-10 00:44:38.490242 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:38.490312 | orchestrator |
2025-04-10 00:44:38.491712 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:38.495545 | orchestrator | Thursday 10 April 2025 00:44:38 +0000 (0:00:00.243) 0:00:02.396 ********
2025-04-10 00:44:38.691235 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:38.692171 | orchestrator |
2025-04-10 00:44:38.692216 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:38.692554 | orchestrator | Thursday 10 April 2025 00:44:38 +0000 (0:00:00.200) 0:00:02.596 ********
2025-04-10 00:44:38.878421 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:38.879199 | orchestrator |
2025-04-10 00:44:38.880578 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:38.881899 | orchestrator | Thursday 10 April 2025 00:44:38 +0000 (0:00:00.189) 0:00:02.785 ********
2025-04-10 00:44:39.076600 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:39.077762 | orchestrator |
2025-04-10 00:44:39.079330 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:39.283451 | orchestrator | Thursday 10 April 2025 00:44:39 +0000 (0:00:00.197) 0:00:02.983 ********
2025-04-10 00:44:39.283585 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:39.283672 | orchestrator |
2025-04-10 00:44:39.283779 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:39.284098 | orchestrator | Thursday 10 April 2025 00:44:39 +0000 (0:00:00.203) 0:00:03.186 ********
2025-04-10 00:44:39.483063 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:39.483293 | orchestrator |
2025-04-10 00:44:39.487847 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:39.488117 | orchestrator | Thursday 10 April 2025 00:44:39 +0000 (0:00:00.201) 0:00:03.387 ********
2025-04-10 00:44:40.118299 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b)
2025-04-10 00:44:40.118515 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b)
2025-04-10 00:44:40.119255 | orchestrator |
2025-04-10 00:44:40.121372 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:40.978880 | orchestrator | Thursday 10 April 2025 00:44:40 +0000 (0:00:00.637) 0:00:04.025 ********
2025-04-10 00:44:40.979033 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e188828f-11b5-49b7-aa2c-198471f41cb7)
2025-04-10 00:44:40.980311 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e188828f-11b5-49b7-aa2c-198471f41cb7)
2025-04-10 00:44:40.982137 | orchestrator |
2025-04-10 00:44:40.982994 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:40.986372 | orchestrator | Thursday 10 April 2025 00:44:40 +0000 (0:00:00.858) 0:00:04.884 ********
2025-04-10 00:44:41.421680 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_57ed073f-7848-4dd1-911d-b06790e5cae3)
2025-04-10 00:44:41.423812 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_57ed073f-7848-4dd1-911d-b06790e5cae3)
2025-04-10 00:44:41.428499 | orchestrator |
2025-04-10 00:44:41.428789 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:41.429555 | orchestrator | Thursday 10 April 2025 00:44:41 +0000 (0:00:00.443) 0:00:05.327 ********
2025-04-10 00:44:41.910275 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4f117f5c-a676-4195-9d53-4eb16ef4d9e2)
2025-04-10 00:44:41.917021 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4f117f5c-a676-4195-9d53-4eb16ef4d9e2)
2025-04-10 00:44:41.917266 | orchestrator |
2025-04-10 00:44:41.921418 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:44:41.922214 | orchestrator | Thursday 10 April 2025 00:44:41 +0000 (0:00:00.485) 0:00:05.813 ********
2025-04-10 00:44:42.309214 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-10 00:44:42.311695 | orchestrator |
2025-04-10 00:44:42.804544 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:42.804659 | orchestrator | Thursday 10 April 2025 00:44:42 +0000 (0:00:00.403) 0:00:06.216 ********
2025-04-10 00:44:42.804695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-04-10 00:44:42.805107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-04-10 00:44:42.805834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-04-10 00:44:42.806926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-04-10 00:44:42.812663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-04-10 00:44:42.812766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-04-10 00:44:42.812787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-04-10 00:44:42.812802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-04-10 00:44:42.812817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-04-10 00:44:42.812832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-04-10 00:44:42.812846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-04-10 00:44:42.812861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-04-10 00:44:42.812875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-04-10 00:44:42.812890 | orchestrator |
2025-04-10 00:44:42.812911 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:42.813574 | orchestrator | Thursday 10 April 2025 00:44:42 +0000 (0:00:00.494) 0:00:06.710 ********
2025-04-10 00:44:43.017148 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:43.017595 | orchestrator |
2025-04-10 00:44:43.017642 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:43.018660 | orchestrator | Thursday 10 April 2025 00:44:43 +0000 (0:00:00.211) 0:00:06.922 ********
2025-04-10 00:44:43.224041 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:43.224811 | orchestrator |
2025-04-10 00:44:43.227519 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:43.231337 | orchestrator | Thursday 10 April 2025 00:44:43 +0000 (0:00:00.206) 0:00:07.129 ********
2025-04-10 00:44:43.441362 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:43.442633 | orchestrator |
2025-04-10 00:44:43.443383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:43.444460 | orchestrator | Thursday 10 April 2025 00:44:43 +0000 (0:00:00.219) 0:00:07.348 ********
2025-04-10 00:44:43.660389 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:43.660778 | orchestrator |
2025-04-10 00:44:43.660818 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:43.660945 | orchestrator | Thursday 10 April 2025 00:44:43 +0000 (0:00:00.218) 0:00:07.566 ********
2025-04-10 00:44:44.288431 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:44.288716 | orchestrator |
2025-04-10 00:44:44.289572 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:44.289841 | orchestrator | Thursday 10 April 2025 00:44:44 +0000 (0:00:00.629) 0:00:08.195 ********
2025-04-10 00:44:44.504992 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:44.505201 | orchestrator |
2025-04-10 00:44:44.505876 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:44.506092 | orchestrator | Thursday 10 April 2025 00:44:44 +0000 (0:00:00.216) 0:00:08.412 ********
2025-04-10 00:44:44.714501 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:44.715270 | orchestrator |
2025-04-10 00:44:44.716365 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:44.717602 | orchestrator | Thursday 10 April 2025 00:44:44 +0000 (0:00:00.207) 0:00:08.620 ********
2025-04-10 00:44:44.932521 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:44.933830 | orchestrator |
2025-04-10 00:44:44.937264 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:45.625750 | orchestrator | Thursday 10 April 2025 00:44:44 +0000 (0:00:00.218) 0:00:08.838 ********
2025-04-10 00:44:45.625891 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-04-10 00:44:45.626449 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-04-10 00:44:45.626912 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-04-10 00:44:45.627598 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-04-10 00:44:45.629240 | orchestrator |
2025-04-10 00:44:45.629566 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:45.630458 | orchestrator | Thursday 10 April 2025 00:44:45 +0000 (0:00:00.693) 0:00:09.532 ********
2025-04-10 00:44:45.851832 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:45.852142 | orchestrator |
2025-04-10 00:44:45.852188 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:45.853052 | orchestrator | Thursday 10 April 2025 00:44:45 +0000 (0:00:00.225) 0:00:09.758 ********
2025-04-10 00:44:46.045433 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:46.045936 | orchestrator |
2025-04-10 00:44:46.046920 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:46.047284 | orchestrator | Thursday 10 April 2025 00:44:46 +0000 (0:00:00.194) 0:00:09.952 ********
2025-04-10 00:44:46.256134 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:46.256305 | orchestrator |
2025-04-10 00:44:46.259363 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:44:46.260362 | orchestrator | Thursday 10 April 2025 00:44:46 +0000 (0:00:00.209) 0:00:10.161 ********
2025-04-10 00:44:46.508264 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:46.508684 | orchestrator |
2025-04-10 00:44:46.509494 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-04-10 00:44:46.511134 | orchestrator | Thursday 10 April 2025 00:44:46 +0000 (0:00:00.253) 0:00:10.415 ********
2025-04-10 00:44:46.646426 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:46.868287 | orchestrator |
2025-04-10 00:44:46.868402 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-04-10 00:44:46.868420 | orchestrator | Thursday 10 April 2025 00:44:46 +0000 (0:00:00.137) 0:00:10.552 ********
2025-04-10 00:44:46.868450 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7af0ad6a-7281-507c-97d1-7760f3587d37'}})
2025-04-10 00:44:46.869625 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52286b97-e205-54c6-a29d-cc3afdc4b583'}})
2025-04-10 00:44:46.869657 | orchestrator |
2025-04-10 00:44:46.871289 | orchestrator | TASK [Create block VGs] ********************************************************
2025-04-10 00:44:49.124177 | orchestrator | Thursday 10 April 2025 00:44:46 +0000 (0:00:00.217) 0:00:10.770 ********
2025-04-10 00:44:49.125385 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})
2025-04-10 00:44:49.125510 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})
2025-04-10 00:44:49.125537 | orchestrator |
2025-04-10 00:44:49.126118 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-04-10 00:44:49.128190 | orchestrator | Thursday 10 April 2025 00:44:49 +0000 (0:00:02.259) 0:00:13.030 ********
2025-04-10 00:44:49.298146 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})
2025-04-10 00:44:49.299088 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})
2025-04-10 00:44:49.299554 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:49.299606 | orchestrator |
2025-04-10 00:44:49.299747 | orchestrator | TASK [Create block LVs] ********************************************************
2025-04-10 00:44:49.300265 | orchestrator | Thursday 10 April 2025 00:44:49 +0000 (0:00:00.174) 0:00:13.204 ********
2025-04-10 00:44:50.781340 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})
2025-04-10 00:44:50.782161 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})
2025-04-10 00:44:50.782211 | orchestrator |
2025-04-10 00:44:50.782295 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-04-10 00:44:50.782927 | orchestrator | Thursday 10 April 2025 00:44:50 +0000 (0:00:01.482) 0:00:14.687 ********
2025-04-10 00:44:50.970450 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})
2025-04-10 00:44:50.971136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})
2025-04-10 00:44:50.971664 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:50.972169 | orchestrator |
2025-04-10 00:44:50.972663 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-04-10 00:44:50.973234 | orchestrator | Thursday 10 April 2025 00:44:50 +0000 (0:00:00.189) 0:00:14.877 ********
2025-04-10 00:44:51.122245 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:51.122583 | orchestrator |
2025-04-10 00:44:51.122620 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-04-10 00:44:51.122643 | orchestrator | Thursday 10 April 2025 00:44:51 +0000 (0:00:00.150) 0:00:15.028 ********
2025-04-10 00:44:51.284397 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})
2025-04-10 00:44:51.285115 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})
2025-04-10 00:44:51.285731 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:51.288600 | orchestrator |
2025-04-10 00:44:51.422313 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-04-10 00:44:51.422423 | orchestrator | Thursday 10 April 2025 00:44:51 +0000 (0:00:00.161) 0:00:15.190 ********
2025-04-10 00:44:51.422457 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:51.423430 | orchestrator |
2025-04-10 00:44:51.423461 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-04-10 00:44:51.423889 | orchestrator | Thursday 10 April 2025 00:44:51 +0000 (0:00:00.138) 0:00:15.329 ********
2025-04-10 00:44:51.596230 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})
2025-04-10 00:44:51.596400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})
2025-04-10 00:44:51.597037 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:51.597865 | orchestrator |
2025-04-10 00:44:51.598249 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-04-10 00:44:51.599038 | orchestrator | Thursday 10 April 2025 00:44:51 +0000 (0:00:00.173) 0:00:15.502 ********
2025-04-10 00:44:51.752700 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:44:51.753211 | orchestrator |
2025-04-10 00:44:51.754156 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-04-10 00:44:51.754925 | orchestrator |
Thursday 10 April 2025 00:44:51 +0000 (0:00:00.154) 0:00:15.657 ******** 2025-04-10 00:44:52.112230 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:44:52.113483 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:44:52.114698 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:52.117440 | orchestrator | 2025-04-10 00:44:52.268077 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-10 00:44:52.268199 | orchestrator | Thursday 10 April 2025 00:44:52 +0000 (0:00:00.361) 0:00:16.019 ******** 2025-04-10 00:44:52.268233 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:44:52.268308 | orchestrator | 2025-04-10 00:44:52.268655 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-10 00:44:52.269121 | orchestrator | Thursday 10 April 2025 00:44:52 +0000 (0:00:00.154) 0:00:16.173 ******** 2025-04-10 00:44:52.428567 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:44:52.429173 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:44:52.430168 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:52.431126 | orchestrator | 2025-04-10 00:44:52.432742 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-10 00:44:52.432847 | orchestrator | Thursday 10 April 2025 00:44:52 +0000 (0:00:00.161) 0:00:16.335 ******** 2025-04-10 00:44:52.614382 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:44:52.614820 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:44:52.615639 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:52.617055 | orchestrator | 2025-04-10 00:44:52.617537 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-10 00:44:52.617934 | orchestrator | Thursday 10 April 2025 00:44:52 +0000 (0:00:00.185) 0:00:16.521 ******** 2025-04-10 00:44:52.777061 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:44:52.777278 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:44:52.777714 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:52.778117 | orchestrator | 2025-04-10 00:44:52.778691 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-10 00:44:52.778815 | orchestrator | Thursday 10 April 2025 00:44:52 +0000 (0:00:00.162) 0:00:16.683 ******** 2025-04-10 00:44:52.907300 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:52.907837 | orchestrator | 2025-04-10 00:44:52.908203 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-10 00:44:52.908497 | orchestrator | Thursday 10 April 2025 00:44:52 +0000 (0:00:00.131) 0:00:16.815 ******** 2025-04-10 00:44:53.050759 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:53.051115 | orchestrator | 2025-04-10 00:44:53.051150 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2025-04-10 00:44:53.051636 | orchestrator | Thursday 10 April 2025 00:44:53 +0000 (0:00:00.142) 0:00:16.958 ******** 2025-04-10 00:44:53.190677 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:53.190886 | orchestrator | 2025-04-10 00:44:53.191329 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-10 00:44:53.191980 | orchestrator | Thursday 10 April 2025 00:44:53 +0000 (0:00:00.139) 0:00:17.097 ******** 2025-04-10 00:44:53.327337 | orchestrator | ok: [testbed-node-3] => { 2025-04-10 00:44:53.327986 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-10 00:44:53.328709 | orchestrator | } 2025-04-10 00:44:53.331626 | orchestrator | 2025-04-10 00:44:53.332911 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-10 00:44:53.334083 | orchestrator | Thursday 10 April 2025 00:44:53 +0000 (0:00:00.134) 0:00:17.232 ******** 2025-04-10 00:44:53.469239 | orchestrator | ok: [testbed-node-3] => { 2025-04-10 00:44:53.469769 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-10 00:44:53.470382 | orchestrator | } 2025-04-10 00:44:53.471006 | orchestrator | 2025-04-10 00:44:53.471482 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-10 00:44:53.473277 | orchestrator | Thursday 10 April 2025 00:44:53 +0000 (0:00:00.143) 0:00:17.376 ******** 2025-04-10 00:44:53.619698 | orchestrator | ok: [testbed-node-3] => { 2025-04-10 00:44:53.620761 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-10 00:44:53.621558 | orchestrator | } 2025-04-10 00:44:53.622316 | orchestrator | 2025-04-10 00:44:53.623489 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-10 00:44:53.624112 | orchestrator | Thursday 10 April 2025 00:44:53 +0000 (0:00:00.150) 0:00:17.526 ******** 2025-04-10 00:44:54.788314 | orchestrator | ok: 
[testbed-node-3] 2025-04-10 00:44:54.789293 | orchestrator | 2025-04-10 00:44:54.790669 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-10 00:44:54.791717 | orchestrator | Thursday 10 April 2025 00:44:54 +0000 (0:00:01.166) 0:00:18.693 ******** 2025-04-10 00:44:55.303487 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:44:55.303939 | orchestrator | 2025-04-10 00:44:55.304069 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-04-10 00:44:55.304498 | orchestrator | Thursday 10 April 2025 00:44:55 +0000 (0:00:00.517) 0:00:19.210 ******** 2025-04-10 00:44:55.865878 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:44:55.866745 | orchestrator | 2025-04-10 00:44:55.866935 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-04-10 00:44:55.867821 | orchestrator | Thursday 10 April 2025 00:44:55 +0000 (0:00:00.561) 0:00:19.772 ******** 2025-04-10 00:44:56.008999 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:44:56.013143 | orchestrator | 2025-04-10 00:44:56.014217 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-10 00:44:56.014282 | orchestrator | Thursday 10 April 2025 00:44:56 +0000 (0:00:00.142) 0:00:19.914 ******** 2025-04-10 00:44:56.146042 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:56.146486 | orchestrator | 2025-04-10 00:44:56.147149 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-10 00:44:56.148024 | orchestrator | Thursday 10 April 2025 00:44:56 +0000 (0:00:00.138) 0:00:20.053 ******** 2025-04-10 00:44:56.276236 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:56.276510 | orchestrator | 2025-04-10 00:44:56.277389 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-10 00:44:56.278276 | orchestrator | 
Thursday 10 April 2025 00:44:56 +0000 (0:00:00.129) 0:00:20.182 ******** 2025-04-10 00:44:56.441304 | orchestrator | ok: [testbed-node-3] => { 2025-04-10 00:44:56.442325 | orchestrator |  "vgs_report": { 2025-04-10 00:44:56.443833 | orchestrator |  "vg": [] 2025-04-10 00:44:56.444812 | orchestrator |  } 2025-04-10 00:44:56.445422 | orchestrator | } 2025-04-10 00:44:56.446562 | orchestrator | 2025-04-10 00:44:56.446845 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-10 00:44:56.447292 | orchestrator | Thursday 10 April 2025 00:44:56 +0000 (0:00:00.164) 0:00:20.347 ******** 2025-04-10 00:44:56.569220 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:56.570668 | orchestrator | 2025-04-10 00:44:56.570904 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-04-10 00:44:56.572026 | orchestrator | Thursday 10 April 2025 00:44:56 +0000 (0:00:00.128) 0:00:20.475 ******** 2025-04-10 00:44:56.708209 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:56.708427 | orchestrator | 2025-04-10 00:44:56.708458 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-04-10 00:44:56.709151 | orchestrator | Thursday 10 April 2025 00:44:56 +0000 (0:00:00.139) 0:00:20.615 ******** 2025-04-10 00:44:56.876991 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:56.877218 | orchestrator | 2025-04-10 00:44:56.877860 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-10 00:44:56.877891 | orchestrator | Thursday 10 April 2025 00:44:56 +0000 (0:00:00.169) 0:00:20.784 ******** 2025-04-10 00:44:57.015544 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:57.016533 | orchestrator | 2025-04-10 00:44:57.016615 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-10 00:44:57.017208 | orchestrator | 
Thursday 10 April 2025 00:44:57 +0000 (0:00:00.136) 0:00:20.920 ******** 2025-04-10 00:44:57.379305 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:57.379544 | orchestrator | 2025-04-10 00:44:57.379598 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-10 00:44:57.518822 | orchestrator | Thursday 10 April 2025 00:44:57 +0000 (0:00:00.365) 0:00:21.286 ******** 2025-04-10 00:44:57.519032 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:57.519343 | orchestrator | 2025-04-10 00:44:57.519827 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-10 00:44:57.521843 | orchestrator | Thursday 10 April 2025 00:44:57 +0000 (0:00:00.140) 0:00:21.426 ******** 2025-04-10 00:44:57.671147 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:57.671319 | orchestrator | 2025-04-10 00:44:57.671564 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-10 00:44:57.672336 | orchestrator | Thursday 10 April 2025 00:44:57 +0000 (0:00:00.152) 0:00:21.578 ******** 2025-04-10 00:44:57.816247 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:57.816811 | orchestrator | 2025-04-10 00:44:57.818259 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-10 00:44:57.819061 | orchestrator | Thursday 10 April 2025 00:44:57 +0000 (0:00:00.143) 0:00:21.722 ******** 2025-04-10 00:44:57.968648 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:57.969070 | orchestrator | 2025-04-10 00:44:57.969744 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-10 00:44:57.972779 | orchestrator | Thursday 10 April 2025 00:44:57 +0000 (0:00:00.152) 0:00:21.874 ******** 2025-04-10 00:44:58.128862 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:58.129394 | orchestrator | 2025-04-10 00:44:58.133314 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-10 00:44:58.270402 | orchestrator | Thursday 10 April 2025 00:44:58 +0000 (0:00:00.160) 0:00:22.034 ******** 2025-04-10 00:44:58.270537 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:58.271559 | orchestrator | 2025-04-10 00:44:58.272110 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-10 00:44:58.272924 | orchestrator | Thursday 10 April 2025 00:44:58 +0000 (0:00:00.142) 0:00:22.177 ******** 2025-04-10 00:44:58.414531 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:58.415723 | orchestrator | 2025-04-10 00:44:58.416063 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-04-10 00:44:58.417123 | orchestrator | Thursday 10 April 2025 00:44:58 +0000 (0:00:00.143) 0:00:22.320 ******** 2025-04-10 00:44:58.551692 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:58.551889 | orchestrator | 2025-04-10 00:44:58.551922 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-10 00:44:58.552334 | orchestrator | Thursday 10 April 2025 00:44:58 +0000 (0:00:00.138) 0:00:22.459 ******** 2025-04-10 00:44:58.696019 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:58.696860 | orchestrator | 2025-04-10 00:44:58.697319 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-10 00:44:58.698175 | orchestrator | Thursday 10 April 2025 00:44:58 +0000 (0:00:00.143) 0:00:22.602 ******** 2025-04-10 00:44:58.879921 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:44:58.880195 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 
'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:44:58.880980 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:58.881101 | orchestrator | 2025-04-10 00:44:58.881558 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-10 00:44:58.882179 | orchestrator | Thursday 10 April 2025 00:44:58 +0000 (0:00:00.184) 0:00:22.787 ******** 2025-04-10 00:44:59.045015 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:44:59.045436 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:44:59.046333 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:59.049708 | orchestrator | 2025-04-10 00:44:59.050283 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-10 00:44:59.051385 | orchestrator | Thursday 10 April 2025 00:44:59 +0000 (0:00:00.164) 0:00:22.951 ******** 2025-04-10 00:44:59.445994 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:44:59.446219 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:44:59.447168 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:59.447834 | orchestrator | 2025-04-10 00:44:59.448692 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-10 00:44:59.449645 | orchestrator | Thursday 10 April 2025 00:44:59 +0000 (0:00:00.399) 0:00:23.351 ******** 2025-04-10 00:44:59.632833 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:44:59.633755 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:44:59.635135 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:59.637226 | orchestrator | 2025-04-10 00:44:59.638233 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-10 00:44:59.638268 | orchestrator | Thursday 10 April 2025 00:44:59 +0000 (0:00:00.188) 0:00:23.539 ******** 2025-04-10 00:44:59.807111 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:44:59.807836 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:44:59.807886 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:59.808640 | orchestrator | 2025-04-10 00:44:59.809084 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-10 00:44:59.810013 | orchestrator | Thursday 10 April 2025 00:44:59 +0000 (0:00:00.174) 0:00:23.714 ******** 2025-04-10 00:44:59.977577 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:44:59.977894 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:44:59.978591 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:44:59.978917 | orchestrator | 2025-04-10 00:44:59.980499 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2025-04-10 00:44:59.981496 | orchestrator | Thursday 10 April 2025 00:44:59 +0000 (0:00:00.171) 0:00:23.885 ******** 2025-04-10 00:45:00.156109 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:45:00.157297 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:45:00.158090 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:45:00.160181 | orchestrator | 2025-04-10 00:45:00.161322 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-10 00:45:00.162064 | orchestrator | Thursday 10 April 2025 00:45:00 +0000 (0:00:00.177) 0:00:24.062 ******** 2025-04-10 00:45:00.327338 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:45:00.328479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:45:00.331862 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:45:00.332181 | orchestrator | 2025-04-10 00:45:00.332207 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-10 00:45:00.332246 | orchestrator | Thursday 10 April 2025 00:45:00 +0000 (0:00:00.171) 0:00:24.234 ******** 2025-04-10 00:45:00.853396 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:45:00.853893 | orchestrator | 2025-04-10 00:45:00.853934 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-04-10 00:45:00.854262 | orchestrator | Thursday 10 April 2025 00:45:00 +0000 
(0:00:00.524) 0:00:24.758 ******** 2025-04-10 00:45:01.344738 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:45:01.345020 | orchestrator | 2025-04-10 00:45:01.345564 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-10 00:45:01.345811 | orchestrator | Thursday 10 April 2025 00:45:01 +0000 (0:00:00.493) 0:00:25.252 ******** 2025-04-10 00:45:01.515593 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:45:01.519413 | orchestrator | 2025-04-10 00:45:01.520475 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-10 00:45:01.520831 | orchestrator | Thursday 10 April 2025 00:45:01 +0000 (0:00:00.153) 0:00:25.405 ******** 2025-04-10 00:45:01.699447 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'vg_name': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'}) 2025-04-10 00:45:01.699669 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'vg_name': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'}) 2025-04-10 00:45:01.699889 | orchestrator | 2025-04-10 00:45:01.700364 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-10 00:45:01.700688 | orchestrator | Thursday 10 April 2025 00:45:01 +0000 (0:00:00.200) 0:00:25.606 ******** 2025-04-10 00:45:02.085545 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:45:02.086098 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:45:02.086152 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:45:02.086970 | orchestrator | 2025-04-10 00:45:02.088012 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2025-04-10 00:45:02.091168 | orchestrator | Thursday 10 April 2025 00:45:02 +0000 (0:00:00.385) 0:00:25.992 ******** 2025-04-10 00:45:02.273498 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:45:02.273744 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:45:02.274304 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:45:02.274925 | orchestrator | 2025-04-10 00:45:02.275712 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-10 00:45:02.276098 | orchestrator | Thursday 10 April 2025 00:45:02 +0000 (0:00:00.188) 0:00:26.180 ******** 2025-04-10 00:45:02.446778 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'})  2025-04-10 00:45:02.447221 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'})  2025-04-10 00:45:02.448121 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:45:02.448589 | orchestrator | 2025-04-10 00:45:02.449530 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-04-10 00:45:02.450335 | orchestrator | Thursday 10 April 2025 00:45:02 +0000 (0:00:00.172) 0:00:26.353 ******** 2025-04-10 00:45:03.193074 | orchestrator | ok: [testbed-node-3] => { 2025-04-10 00:45:03.193419 | orchestrator |  "lvm_report": { 2025-04-10 00:45:03.194100 | orchestrator |  "lv": [ 2025-04-10 00:45:03.195035 | orchestrator |  { 2025-04-10 00:45:03.196042 | orchestrator |  "lv_name": 
"osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583", 2025-04-10 00:45:03.196908 | orchestrator |  "vg_name": "ceph-52286b97-e205-54c6-a29d-cc3afdc4b583" 2025-04-10 00:45:03.197367 | orchestrator |  }, 2025-04-10 00:45:03.198098 | orchestrator |  { 2025-04-10 00:45:03.199226 | orchestrator |  "lv_name": "osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37", 2025-04-10 00:45:03.199649 | orchestrator |  "vg_name": "ceph-7af0ad6a-7281-507c-97d1-7760f3587d37" 2025-04-10 00:45:03.200371 | orchestrator |  } 2025-04-10 00:45:03.201412 | orchestrator |  ], 2025-04-10 00:45:03.201635 | orchestrator |  "pv": [ 2025-04-10 00:45:03.202471 | orchestrator |  { 2025-04-10 00:45:03.202902 | orchestrator |  "pv_name": "/dev/sdb", 2025-04-10 00:45:03.203659 | orchestrator |  "vg_name": "ceph-7af0ad6a-7281-507c-97d1-7760f3587d37" 2025-04-10 00:45:03.204702 | orchestrator |  }, 2025-04-10 00:45:03.205253 | orchestrator |  { 2025-04-10 00:45:03.205970 | orchestrator |  "pv_name": "/dev/sdc", 2025-04-10 00:45:03.206979 | orchestrator |  "vg_name": "ceph-52286b97-e205-54c6-a29d-cc3afdc4b583" 2025-04-10 00:45:03.207617 | orchestrator |  } 2025-04-10 00:45:03.208130 | orchestrator |  ] 2025-04-10 00:45:03.209050 | orchestrator |  } 2025-04-10 00:45:03.210109 | orchestrator | } 2025-04-10 00:45:03.210612 | orchestrator | 2025-04-10 00:45:03.211928 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-10 00:45:03.212609 | orchestrator | 2025-04-10 00:45:03.213389 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-10 00:45:03.215374 | orchestrator | Thursday 10 April 2025 00:45:03 +0000 (0:00:00.746) 0:00:27.100 ******** 2025-04-10 00:45:03.845794 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-10 00:45:03.846180 | orchestrator | 2025-04-10 00:45:03.846231 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-10 
00:45:03.846856 | orchestrator | Thursday 10 April 2025 00:45:03 +0000 (0:00:00.650) 0:00:27.751 ********
2025-04-10 00:45:04.139367 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:45:04.140596 | orchestrator |
2025-04-10 00:45:04.140625 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:04.141751 | orchestrator | Thursday 10 April 2025 00:45:04 +0000 (0:00:00.294) 0:00:28.045 ********
2025-04-10 00:45:04.635478 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-04-10 00:45:04.636052 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-04-10 00:45:04.638281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-04-10 00:45:04.638673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-04-10 00:45:04.640052 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-04-10 00:45:04.640996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-04-10 00:45:04.641584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-04-10 00:45:04.642381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-04-10 00:45:04.642894 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-04-10 00:45:04.643513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-04-10 00:45:04.644025 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-04-10 00:45:04.644365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-04-10 00:45:04.644908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-04-10 00:45:04.645251 | orchestrator |
2025-04-10 00:45:04.645973 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:04.646259 | orchestrator | Thursday 10 April 2025 00:45:04 +0000 (0:00:00.494) 0:00:28.540 ********
2025-04-10 00:45:04.849208 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:04.849415 | orchestrator |
2025-04-10 00:45:04.850133 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:04.850387 | orchestrator | Thursday 10 April 2025 00:45:04 +0000 (0:00:00.215) 0:00:28.756 ********
2025-04-10 00:45:05.057552 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:05.058643 | orchestrator |
2025-04-10 00:45:05.058679 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:05.060732 | orchestrator | Thursday 10 April 2025 00:45:05 +0000 (0:00:00.206) 0:00:28.963 ********
2025-04-10 00:45:05.269821 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:05.270255 | orchestrator |
2025-04-10 00:45:05.270891 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:05.479564 | orchestrator | Thursday 10 April 2025 00:45:05 +0000 (0:00:00.213) 0:00:29.176 ********
2025-04-10 00:45:05.479710 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:05.480718 | orchestrator |
2025-04-10 00:45:05.481281 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:05.481999 | orchestrator | Thursday 10 April 2025 00:45:05 +0000 (0:00:00.209) 0:00:29.385 ********
2025-04-10 00:45:05.689224 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:05.690183 | orchestrator |
2025-04-10 00:45:05.690967 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:05.691227 | orchestrator | Thursday 10 April 2025 00:45:05 +0000 (0:00:00.210) 0:00:29.595 ********
2025-04-10 00:45:05.886055 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:05.886483 | orchestrator |
2025-04-10 00:45:05.887339 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:05.887508 | orchestrator | Thursday 10 April 2025 00:45:05 +0000 (0:00:00.196) 0:00:29.792 ********
2025-04-10 00:45:06.076784 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:06.077075 | orchestrator |
2025-04-10 00:45:06.077496 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:06.078447 | orchestrator | Thursday 10 April 2025 00:45:06 +0000 (0:00:00.191) 0:00:29.984 ********
2025-04-10 00:45:06.548826 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:07.007024 | orchestrator |
2025-04-10 00:45:07.007156 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:07.007178 | orchestrator | Thursday 10 April 2025 00:45:06 +0000 (0:00:00.469) 0:00:30.453 ********
2025-04-10 00:45:07.007210 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a)
2025-04-10 00:45:07.007285 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a)
2025-04-10 00:45:07.007329 | orchestrator |
2025-04-10 00:45:07.007350 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:07.007549 | orchestrator | Thursday 10 April 2025 00:45:07 +0000 (0:00:00.459) 0:00:30.913 ********
2025-04-10 00:45:07.474491 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_864e33c6-b4c3-48eb-91b8-2629744c3ba6)
2025-04-10 00:45:07.475134 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_864e33c6-b4c3-48eb-91b8-2629744c3ba6)
2025-04-10 00:45:07.476097 | orchestrator |
2025-04-10 00:45:07.476903 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:07.477798 | orchestrator | Thursday 10 April 2025 00:45:07 +0000 (0:00:00.468) 0:00:31.381 ********
2025-04-10 00:45:07.936449 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b0ed1186-9beb-4d4b-adab-3343747bf238)
2025-04-10 00:45:07.937097 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b0ed1186-9beb-4d4b-adab-3343747bf238)
2025-04-10 00:45:07.938729 | orchestrator |
2025-04-10 00:45:07.939493 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:07.939532 | orchestrator | Thursday 10 April 2025 00:45:07 +0000 (0:00:00.460) 0:00:31.842 ********
2025-04-10 00:45:08.397421 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fa805255-2b65-45ba-aa52-d97cf6f3e06a)
2025-04-10 00:45:08.397791 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fa805255-2b65-45ba-aa52-d97cf6f3e06a)
2025-04-10 00:45:08.398076 | orchestrator |
2025-04-10 00:45:08.398609 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-10 00:45:08.398822 | orchestrator | Thursday 10 April 2025 00:45:08 +0000 (0:00:00.462) 0:00:32.304 ********
2025-04-10 00:45:08.766286 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-10 00:45:08.766453 | orchestrator |
2025-04-10 00:45:08.767189 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:08.767389 | orchestrator | Thursday 10 April 2025 00:45:08 +0000 (0:00:00.366) 0:00:32.671 ********
2025-04-10 00:45:09.270158 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-04-10 00:45:09.271062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-04-10 00:45:09.271875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-04-10 00:45:09.272994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-04-10 00:45:09.276496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-04-10 00:45:09.277588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-04-10 00:45:09.277710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-04-10 00:45:09.277747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-04-10 00:45:09.278608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-04-10 00:45:09.278720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-04-10 00:45:09.279064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-04-10 00:45:09.279990 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-04-10 00:45:09.280305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-04-10 00:45:09.281034 | orchestrator |
2025-04-10 00:45:09.281793 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:09.282097 | orchestrator | Thursday 10 April 2025 00:45:09 +0000 (0:00:00.505) 0:00:33.176 ********
2025-04-10 00:45:09.468163 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:09.468490 | orchestrator |
2025-04-10 00:45:09.469075 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:09.469148 | orchestrator | Thursday 10 April 2025 00:45:09 +0000 (0:00:00.196) 0:00:33.373 ********
2025-04-10 00:45:09.657165 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:09.658135 | orchestrator |
2025-04-10 00:45:09.659846 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:09.661535 | orchestrator | Thursday 10 April 2025 00:45:09 +0000 (0:00:00.188) 0:00:33.562 ********
2025-04-10 00:45:10.171634 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:10.172830 | orchestrator |
2025-04-10 00:45:10.172867 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:10.173432 | orchestrator | Thursday 10 April 2025 00:45:10 +0000 (0:00:00.514) 0:00:34.077 ********
2025-04-10 00:45:10.383015 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:10.383375 | orchestrator |
2025-04-10 00:45:10.384234 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:10.385133 | orchestrator | Thursday 10 April 2025 00:45:10 +0000 (0:00:00.212) 0:00:34.289 ********
2025-04-10 00:45:10.621037 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:10.621734 | orchestrator |
2025-04-10 00:45:10.622838 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:10.625114 | orchestrator | Thursday 10 April 2025 00:45:10 +0000 (0:00:00.237) 0:00:34.526 ********
2025-04-10 00:45:10.829768 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:10.830306 | orchestrator |
2025-04-10 00:45:10.833174 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:11.041882 | orchestrator | Thursday 10 April 2025 00:45:10 +0000 (0:00:00.208) 0:00:34.734 ********
2025-04-10 00:45:11.042107 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:11.042185 | orchestrator |
2025-04-10 00:45:11.042204 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:11.042224 | orchestrator | Thursday 10 April 2025 00:45:11 +0000 (0:00:00.212) 0:00:34.947 ********
2025-04-10 00:45:11.247183 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:11.247309 | orchestrator |
2025-04-10 00:45:11.247755 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:11.248515 | orchestrator | Thursday 10 April 2025 00:45:11 +0000 (0:00:00.204) 0:00:35.152 ********
2025-04-10 00:45:11.973301 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-04-10 00:45:11.973881 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-04-10 00:45:11.974185 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-04-10 00:45:11.974301 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-04-10 00:45:11.974704 | orchestrator |
2025-04-10 00:45:11.975067 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:11.975854 | orchestrator | Thursday 10 April 2025 00:45:11 +0000 (0:00:00.727) 0:00:35.880 ********
2025-04-10 00:45:12.176636 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:12.176889 | orchestrator |
2025-04-10 00:45:12.177174 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:12.177749 | orchestrator | Thursday 10 April 2025 00:45:12 +0000 (0:00:00.203) 0:00:36.083 ********
2025-04-10 00:45:12.389636 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:12.389847 | orchestrator |
2025-04-10 00:45:12.389863 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:12.390525 | orchestrator | Thursday 10 April 2025 00:45:12 +0000 (0:00:00.213) 0:00:36.296 ********
2025-04-10 00:45:12.582594 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:12.582858 | orchestrator |
2025-04-10 00:45:12.583156 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-10 00:45:12.583693 | orchestrator | Thursday 10 April 2025 00:45:12 +0000 (0:00:00.193) 0:00:36.490 ********
2025-04-10 00:45:13.322194 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:13.323199 | orchestrator |
2025-04-10 00:45:13.323384 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-04-10 00:45:13.323470 | orchestrator | Thursday 10 April 2025 00:45:13 +0000 (0:00:00.737) 0:00:37.227 ********
2025-04-10 00:45:13.493317 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:13.494154 | orchestrator |
2025-04-10 00:45:13.494196 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-04-10 00:45:13.495021 | orchestrator | Thursday 10 April 2025 00:45:13 +0000 (0:00:00.173) 0:00:37.401 ********
2025-04-10 00:45:13.715548 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e6570ad4-669c-53e9-93b8-24292f6b58fb'}})
2025-04-10 00:45:13.716125 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '543b72d2-41b4-5023-b438-6662cb79109c'}})
2025-04-10 00:45:13.716924 | orchestrator |
2025-04-10 00:45:13.719307 | orchestrator | TASK [Create block VGs] ********************************************************
2025-04-10 00:45:15.517480 | orchestrator | Thursday 10 April 2025 00:45:13 +0000 (0:00:00.220) 0:00:37.621 ********
2025-04-10 00:45:15.517648 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:15.519417 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:15.519450 | orchestrator |
2025-04-10 00:45:15.520286 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-04-10 00:45:15.521463 | orchestrator | Thursday 10 April 2025 00:45:15 +0000 (0:00:01.798) 0:00:39.420 ********
2025-04-10 00:45:15.697223 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:15.697389 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:15.698865 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:15.699619 | orchestrator |
2025-04-10 00:45:15.699647 | orchestrator | TASK [Create block LVs] ********************************************************
2025-04-10 00:45:15.699671 | orchestrator | Thursday 10 April 2025 00:45:15 +0000 (0:00:00.183) 0:00:39.604 ********
2025-04-10 00:45:16.999104 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:16.999244 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:16.999264 | orchestrator |
2025-04-10 00:45:16.999888 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-04-10 00:45:17.000133 | orchestrator | Thursday 10 April 2025 00:45:16 +0000 (0:00:01.300) 0:00:40.904 ********
2025-04-10 00:45:17.172371 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:17.174067 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:17.176836 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:17.176887 | orchestrator |
2025-04-10 00:45:17.344894 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-04-10 00:45:17.345071 | orchestrator | Thursday 10 April 2025 00:45:17 +0000 (0:00:00.174) 0:00:41.078 ********
2025-04-10 00:45:17.345107 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:17.346760 | orchestrator |
2025-04-10 00:45:17.347527 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-04-10 00:45:17.348744 | orchestrator | Thursday 10 April 2025 00:45:17 +0000 (0:00:00.171) 0:00:41.250 ********
2025-04-10 00:45:17.510519 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:17.510869 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:17.511301 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:17.512369 | orchestrator |
2025-04-10 00:45:17.513693 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-04-10 00:45:17.514373 | orchestrator | Thursday 10 April 2025 00:45:17 +0000 (0:00:00.168) 0:00:41.418 ********
2025-04-10 00:45:17.869429 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:17.869592 | orchestrator |
2025-04-10 00:45:17.869625 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-04-10 00:45:17.870111 | orchestrator | Thursday 10 April 2025 00:45:17 +0000 (0:00:00.355) 0:00:41.773 ********
2025-04-10 00:45:18.038695 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:18.039609 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:18.040246 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:18.040989 | orchestrator |
2025-04-10 00:45:18.041498 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-04-10 00:45:18.042133 | orchestrator | Thursday 10 April 2025 00:45:18 +0000 (0:00:00.172) 0:00:41.946 ********
2025-04-10 00:45:18.182429 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:18.183573 | orchestrator |
2025-04-10 00:45:18.185140 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-04-10 00:45:18.186350 | orchestrator | Thursday 10 April 2025 00:45:18 +0000 (0:00:00.143) 0:00:42.090 ********
2025-04-10 00:45:18.365225 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:18.517579 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:18.517685 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:18.517701 | orchestrator |
2025-04-10 00:45:18.517716 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-04-10 00:45:18.517731 | orchestrator | Thursday 10 April 2025 00:45:18 +0000 (0:00:00.177) 0:00:42.267 ********
2025-04-10 00:45:18.517759 | orchestrator | ok: [testbed-node-4]
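The "Create dict of block VGs -> PVs from ceph_osd_devices" and "Create block VGs/LVs" tasks above show a consistent naming pattern: each device's `osd_lvm_uuid` becomes a volume group `ceph-<uuid>` holding a logical volume `osd-block-<uuid>`. As an illustrative sketch only (not the playbook's actual code), the mapping visible in the loop items can be reproduced like this; the function name `block_vg_items` is a hypothetical helper:

```python
# Sketch: derive the VG/LV loop items seen in the log from a
# ceph_osd_devices-style dict. Names follow the pattern printed by the
# "Create block VGs" / "Create block LVs" tasks: VG "ceph-<uuid>",
# LV "osd-block-<uuid>".
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "e6570ad4-669c-53e9-93b8-24292f6b58fb"},
    "sdc": {"osd_lvm_uuid": "543b72d2-41b4-5023-b438-6662cb79109c"},
}

def block_vg_items(devices: dict) -> list:
    """Map each OSD device to its block LV ('data') and VG ('data_vg') names."""
    return [
        {
            "data": "osd-block-" + v["osd_lvm_uuid"],
            "data_vg": "ceph-" + v["osd_lvm_uuid"],
        }
        for v in devices.values()
    ]

for item in block_vg_items(ceph_osd_devices):
    print(item["data_vg"], "->", item["data"])
```

This matches the two `changed:` items reported for testbed-node-4 above, one VG/LV pair per OSD device.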
2025-04-10 00:45:18.517886 | orchestrator |
2025-04-10 00:45:18.518917 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-04-10 00:45:18.519813 | orchestrator | Thursday 10 April 2025 00:45:18 +0000 (0:00:00.156) 0:00:42.424 ********
2025-04-10 00:45:18.689327 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:18.690208 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:18.691142 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:18.691701 | orchestrator |
2025-04-10 00:45:18.692494 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-04-10 00:45:18.693214 | orchestrator | Thursday 10 April 2025 00:45:18 +0000 (0:00:00.172) 0:00:42.596 ********
2025-04-10 00:45:18.877171 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:18.877389 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:18.878346 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:18.881454 | orchestrator |
2025-04-10 00:45:18.881891 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-04-10 00:45:18.881924 | orchestrator | Thursday 10 April 2025 00:45:18 +0000 (0:00:00.187) 0:00:42.784 ********
2025-04-10 00:45:19.049194 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:19.049548 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:19.049986 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:19.050653 | orchestrator |
2025-04-10 00:45:19.051245 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-04-10 00:45:19.054361 | orchestrator | Thursday 10 April 2025 00:45:19 +0000 (0:00:00.171) 0:00:42.955 ********
2025-04-10 00:45:19.186688 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:19.186865 | orchestrator |
2025-04-10 00:45:19.188145 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-04-10 00:45:19.188369 | orchestrator | Thursday 10 April 2025 00:45:19 +0000 (0:00:00.138) 0:00:43.094 ********
2025-04-10 00:45:19.330460 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:19.331676 | orchestrator |
2025-04-10 00:45:19.331714 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-04-10 00:45:19.332843 | orchestrator | Thursday 10 April 2025 00:45:19 +0000 (0:00:00.142) 0:00:43.236 ********
2025-04-10 00:45:19.482381 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:19.483399 | orchestrator |
2025-04-10 00:45:19.483455 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-04-10 00:45:19.483895 | orchestrator | Thursday 10 April 2025 00:45:19 +0000 (0:00:00.153) 0:00:43.389 ********
2025-04-10 00:45:19.628597 | orchestrator | ok: [testbed-node-4] => {
2025-04-10 00:45:19.629238 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-04-10 00:45:19.630253 | orchestrator | }
2025-04-10 00:45:19.630523 | orchestrator |
2025-04-10 00:45:19.631165 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-04-10 00:45:19.631773 | orchestrator | Thursday 10 April 2025 00:45:19 +0000 (0:00:00.145) 0:00:43.535 ********
2025-04-10 00:45:20.009389 | orchestrator | ok: [testbed-node-4] => {
2025-04-10 00:45:20.010212 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-04-10 00:45:20.011428 | orchestrator | }
2025-04-10 00:45:20.012615 | orchestrator |
2025-04-10 00:45:20.013601 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-04-10 00:45:20.014771 | orchestrator | Thursday 10 April 2025 00:45:20 +0000 (0:00:00.378) 0:00:43.914 ********
2025-04-10 00:45:20.159457 | orchestrator | ok: [testbed-node-4] => {
2025-04-10 00:45:20.161176 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-04-10 00:45:20.162592 | orchestrator | }
2025-04-10 00:45:20.163268 | orchestrator |
2025-04-10 00:45:20.164504 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-04-10 00:45:20.165066 | orchestrator | Thursday 10 April 2025 00:45:20 +0000 (0:00:00.150) 0:00:44.065 ********
2025-04-10 00:45:20.662723 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:45:20.666735 | orchestrator |
2025-04-10 00:45:20.667070 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-04-10 00:45:20.667327 | orchestrator | Thursday 10 April 2025 00:45:20 +0000 (0:00:00.502) 0:00:44.568 ********
2025-04-10 00:45:21.176316 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:45:21.177161 | orchestrator |
2025-04-10 00:45:21.179184 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-04-10 00:45:21.179377 | orchestrator | Thursday 10 April 2025 00:45:21 +0000 (0:00:00.513) 0:00:45.081 ********
2025-04-10 00:45:21.714144 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:45:21.714367 | orchestrator |
2025-04-10 00:45:21.715845 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-04-10 00:45:21.718410 | orchestrator | Thursday 10 April 2025 00:45:21 +0000 (0:00:00.539) 0:00:45.620 ********
2025-04-10 00:45:21.887102 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:45:21.887674 | orchestrator |
2025-04-10 00:45:21.888893 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-04-10 00:45:21.889262 | orchestrator | Thursday 10 April 2025 00:45:21 +0000 (0:00:00.170) 0:00:45.790 ********
2025-04-10 00:45:22.026900 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:22.027803 | orchestrator |
2025-04-10 00:45:22.027877 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-04-10 00:45:22.028703 | orchestrator | Thursday 10 April 2025 00:45:22 +0000 (0:00:00.143) 0:00:45.934 ********
2025-04-10 00:45:22.157602 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:22.159200 | orchestrator |
2025-04-10 00:45:22.159703 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-04-10 00:45:22.162406 | orchestrator | Thursday 10 April 2025 00:45:22 +0000 (0:00:00.129) 0:00:46.064 ********
2025-04-10 00:45:22.324019 | orchestrator | ok: [testbed-node-4] => {
2025-04-10 00:45:22.325054 | orchestrator |  "vgs_report": {
2025-04-10 00:45:22.326272 | orchestrator |  "vg": []
2025-04-10 00:45:22.327353 | orchestrator |  }
2025-04-10 00:45:22.328539 | orchestrator | }
2025-04-10 00:45:22.329632 | orchestrator |
2025-04-10 00:45:22.330078 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-04-10 00:45:22.330780 | orchestrator | Thursday 10 April 2025 00:45:22 +0000 (0:00:00.164) 0:00:46.229 ********
2025-04-10 00:45:22.466685 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:22.467979 | orchestrator |
2025-04-10 00:45:22.468925 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-04-10 00:45:22.469575 | orchestrator | Thursday 10 April 2025 00:45:22 +0000 (0:00:00.144) 0:00:46.373 ********
2025-04-10 00:45:22.604198 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:22.604551 | orchestrator |
2025-04-10 00:45:22.605511 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-04-10 00:45:22.606408 | orchestrator | Thursday 10 April 2025 00:45:22 +0000 (0:00:00.137) 0:00:46.510 ********
2025-04-10 00:45:22.957034 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:22.957268 | orchestrator |
2025-04-10 00:45:22.958204 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-04-10 00:45:22.958754 | orchestrator | Thursday 10 April 2025 00:45:22 +0000 (0:00:00.350) 0:00:46.861 ********
2025-04-10 00:45:23.113172 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:23.113330 | orchestrator |
2025-04-10 00:45:23.113702 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-04-10 00:45:23.114189 | orchestrator | Thursday 10 April 2025 00:45:23 +0000 (0:00:00.159) 0:00:47.020 ********
2025-04-10 00:45:23.243930 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:23.244345 | orchestrator |
2025-04-10 00:45:23.245233 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-04-10 00:45:23.246396 | orchestrator | Thursday 10 April 2025 00:45:23 +0000 (0:00:00.129) 0:00:47.150 ********
2025-04-10 00:45:23.393272 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:23.395175 | orchestrator |
2025-04-10 00:45:23.395515 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-04-10 00:45:23.396424 | orchestrator | Thursday 10 April 2025 00:45:23 +0000 (0:00:00.149) 0:00:47.300 ********
2025-04-10 00:45:23.544099 | orchestrator | skipping: [testbed-node-4]
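The "Gather DB/WAL VGs with total and available size in bytes" tasks above collect volume-group sizes as JSON (the later `vgs_report` debug output shows the combined, here empty, `"vg"` list). As a hedged sketch of that pattern, not the playbook's actual implementation: LVM's `vgs --reportformat json --units B` emits a report of this shape, which can be reduced to total/free bytes per VG. The sample string below is a stand-in for real command output, and `vg_sizes` is a hypothetical helper:

```python
import json

# Stand-in for `vgs --reportformat json --units B` output (assumed format:
# a "report" list whose entries carry a "vg" list with vg_name/vg_size/vg_free).
sample_vgs_json = """
{"report": [{"vg": [
    {"vg_name": "ceph-db-0", "vg_size": "107374182400B", "vg_free": "42949672960B"}
]}]}
"""

def vg_sizes(report_json: str) -> dict:
    """Return {vg_name: {'total': bytes, 'free': bytes}} from a vgs JSON report."""
    report = json.loads(report_json)
    return {
        vg["vg_name"]: {
            "total": int(vg["vg_size"].rstrip("B")),
            "free": int(vg["vg_free"].rstrip("B")),
        }
        for vg in report["report"][0]["vg"]
    }

print(vg_sizes(sample_vgs_json))
```

With no DB/WAL devices configured in this run, the equivalent gathering tasks above produced an empty VG list, which is why every subsequent size-check task is skipped.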
2025-04-10 00:45:23.544706 | orchestrator |
2025-04-10 00:45:23.545147 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-04-10 00:45:23.545790 | orchestrator | Thursday 10 April 2025 00:45:23 +0000 (0:00:00.151) 0:00:47.451 ********
2025-04-10 00:45:23.687248 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:23.688050 | orchestrator |
2025-04-10 00:45:23.688608 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-04-10 00:45:23.689089 | orchestrator | Thursday 10 April 2025 00:45:23 +0000 (0:00:00.143) 0:00:47.595 ********
2025-04-10 00:45:23.852930 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:23.855113 | orchestrator |
2025-04-10 00:45:23.856602 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-04-10 00:45:23.856969 | orchestrator | Thursday 10 April 2025 00:45:23 +0000 (0:00:00.162) 0:00:47.757 ********
2025-04-10 00:45:23.985394 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:23.986562 | orchestrator |
2025-04-10 00:45:23.987441 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-04-10 00:45:23.988644 | orchestrator | Thursday 10 April 2025 00:45:23 +0000 (0:00:00.133) 0:00:47.891 ********
2025-04-10 00:45:24.146141 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:24.146744 | orchestrator |
2025-04-10 00:45:24.148020 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-04-10 00:45:24.148784 | orchestrator | Thursday 10 April 2025 00:45:24 +0000 (0:00:00.160) 0:00:48.051 ********
2025-04-10 00:45:24.340259 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:24.340545 | orchestrator |
2025-04-10 00:45:24.341378 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-04-10 00:45:24.341879 | orchestrator | Thursday 10 April 2025 00:45:24 +0000 (0:00:00.193) 0:00:48.244 ********
2025-04-10 00:45:24.492584 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:24.493481 | orchestrator |
2025-04-10 00:45:24.493898 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-04-10 00:45:24.494790 | orchestrator | Thursday 10 April 2025 00:45:24 +0000 (0:00:00.154) 0:00:48.399 ********
2025-04-10 00:45:24.650233 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:24.650535 | orchestrator |
2025-04-10 00:45:24.651284 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-04-10 00:45:24.654376 | orchestrator | Thursday 10 April 2025 00:45:24 +0000 (0:00:00.157) 0:00:48.556 ********
2025-04-10 00:45:25.299831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:25.301479 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:25.302380 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:25.303144 | orchestrator |
2025-04-10 00:45:25.304315 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-04-10 00:45:25.305279 | orchestrator | Thursday 10 April 2025 00:45:25 +0000 (0:00:00.648) 0:00:49.204 ********
2025-04-10 00:45:25.482461 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:25.483329 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:25.483811 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:25.484654 | orchestrator |
2025-04-10 00:45:25.485598 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-04-10 00:45:25.486138 | orchestrator | Thursday 10 April 2025 00:45:25 +0000 (0:00:00.183) 0:00:49.387 ********
2025-04-10 00:45:25.667137 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:25.668041 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:25.669529 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:25.670676 | orchestrator |
2025-04-10 00:45:25.670984 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-04-10 00:45:25.672150 | orchestrator | Thursday 10 April 2025 00:45:25 +0000 (0:00:00.184) 0:00:49.572 ********
2025-04-10 00:45:25.843872 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:25.844726 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:25.847580 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:25.848335 | orchestrator |
2025-04-10 00:45:25.850156 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-04-10 00:45:25.850637 | orchestrator | Thursday 10 April 2025 00:45:25 +0000 (0:00:00.175) 0:00:49.747 ********
2025-04-10 00:45:26.062785 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:26.063883 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:26.064687 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:26.068316 | orchestrator |
2025-04-10 00:45:26.068451 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-04-10 00:45:26.068478 | orchestrator | Thursday 10 April 2025 00:45:26 +0000 (0:00:00.221) 0:00:49.969 ********
2025-04-10 00:45:26.230705 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:26.231821 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:26.232258 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:26.232732 | orchestrator |
2025-04-10 00:45:26.233380 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-04-10 00:45:26.234208 | orchestrator | Thursday 10 April 2025 00:45:26 +0000 (0:00:00.168) 0:00:50.137 ********
2025-04-10 00:45:26.404647 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:26.405775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:26.405803 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:26.405816 | orchestrator |
2025-04-10 00:45:26.405833 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-04-10 00:45:26.581142 | orchestrator | Thursday 10 April 2025 00:45:26 +0000 (0:00:00.173) 0:00:50.311 ********
2025-04-10 00:45:26.581272 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})
2025-04-10 00:45:26.582789 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})
2025-04-10 00:45:26.582822 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:45:26.582848 | orchestrator |
2025-04-10 00:45:26.583183 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-04-10 00:45:26.586182 | orchestrator | Thursday 10 April 2025 00:45:26 +0000 (0:00:00.176) 0:00:50.487 ********
2025-04-10 00:45:27.143281 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:45:27.663804 | orchestrator |
2025-04-10 00:45:27.664039 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-04-10 00:45:27.664064 | orchestrator | Thursday 10 April 2025 00:45:27 +0000 (0:00:00.561) 0:00:51.049 ********
2025-04-10 00:45:27.664099 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:45:27.664185 | orchestrator |
2025-04-10 00:45:27.664798 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-04-10 00:45:27.665623 | orchestrator | Thursday 10 April 2025 00:45:27 +0000 (0:00:00.521) 0:00:51.570 ********
2025-04-10 00:45:28.054893 | orchestrator | ok: [testbed-node-4]
2025-04-10 00:45:28.055501 | orchestrator |
2025-04-10 00:45:28.058171 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-04-10 00:45:28.267826 | orchestrator | Thursday 10 April 2025 00:45:28 +0000 (0:00:00.389) 0:00:51.960 ********
2025-04-10 00:45:28.267933 | orchestrator | ok: [testbed-node-4] => (item={'lv_name':
'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'vg_name': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'}) 2025-04-10 00:45:28.273779 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'vg_name': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'}) 2025-04-10 00:45:28.274161 | orchestrator | 2025-04-10 00:45:28.277339 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-10 00:45:28.278088 | orchestrator | Thursday 10 April 2025 00:45:28 +0000 (0:00:00.213) 0:00:52.173 ******** 2025-04-10 00:45:28.478110 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})  2025-04-10 00:45:28.479213 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})  2025-04-10 00:45:28.480662 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:45:28.480714 | orchestrator | 2025-04-10 00:45:28.481272 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-10 00:45:28.483713 | orchestrator | Thursday 10 April 2025 00:45:28 +0000 (0:00:00.211) 0:00:52.385 ******** 2025-04-10 00:45:28.659666 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})  2025-04-10 00:45:28.661110 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})  2025-04-10 00:45:28.662648 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:45:28.663392 | orchestrator | 2025-04-10 00:45:28.664712 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-10 00:45:28.665719 | 
orchestrator | Thursday 10 April 2025 00:45:28 +0000 (0:00:00.180) 0:00:52.565 ******** 2025-04-10 00:45:28.824359 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'})  2025-04-10 00:45:28.824732 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'})  2025-04-10 00:45:28.825197 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:45:28.826665 | orchestrator | 2025-04-10 00:45:28.827278 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-04-10 00:45:28.828969 | orchestrator | Thursday 10 April 2025 00:45:28 +0000 (0:00:00.164) 0:00:52.730 ******** 2025-04-10 00:45:29.759999 | orchestrator | ok: [testbed-node-4] => { 2025-04-10 00:45:29.760636 | orchestrator |  "lvm_report": { 2025-04-10 00:45:29.763391 | orchestrator |  "lv": [ 2025-04-10 00:45:29.764206 | orchestrator |  { 2025-04-10 00:45:29.764252 | orchestrator |  "lv_name": "osd-block-543b72d2-41b4-5023-b438-6662cb79109c", 2025-04-10 00:45:29.764693 | orchestrator |  "vg_name": "ceph-543b72d2-41b4-5023-b438-6662cb79109c" 2025-04-10 00:45:29.765859 | orchestrator |  }, 2025-04-10 00:45:29.766263 | orchestrator |  { 2025-04-10 00:45:29.767196 | orchestrator |  "lv_name": "osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb", 2025-04-10 00:45:29.768070 | orchestrator |  "vg_name": "ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb" 2025-04-10 00:45:29.768788 | orchestrator |  } 2025-04-10 00:45:29.769707 | orchestrator |  ], 2025-04-10 00:45:29.770716 | orchestrator |  "pv": [ 2025-04-10 00:45:29.771340 | orchestrator |  { 2025-04-10 00:45:29.772203 | orchestrator |  "pv_name": "/dev/sdb", 2025-04-10 00:45:29.773036 | orchestrator |  "vg_name": "ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb" 2025-04-10 00:45:29.773761 | orchestrator |  }, 2025-04-10 
00:45:29.774407 | orchestrator |  { 2025-04-10 00:45:29.775424 | orchestrator |  "pv_name": "/dev/sdc", 2025-04-10 00:45:29.775682 | orchestrator |  "vg_name": "ceph-543b72d2-41b4-5023-b438-6662cb79109c" 2025-04-10 00:45:29.776718 | orchestrator |  } 2025-04-10 00:45:29.777766 | orchestrator |  ] 2025-04-10 00:45:29.778595 | orchestrator |  } 2025-04-10 00:45:29.780109 | orchestrator | } 2025-04-10 00:45:29.781160 | orchestrator | 2025-04-10 00:45:29.782070 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-10 00:45:29.782607 | orchestrator | 2025-04-10 00:45:29.782973 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-10 00:45:29.783674 | orchestrator | Thursday 10 April 2025 00:45:29 +0000 (0:00:00.935) 0:00:53.665 ******** 2025-04-10 00:45:30.016630 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-10 00:45:30.017322 | orchestrator | 2025-04-10 00:45:30.017685 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-10 00:45:30.018132 | orchestrator | Thursday 10 April 2025 00:45:30 +0000 (0:00:00.257) 0:00:53.922 ******** 2025-04-10 00:45:30.254086 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:45:30.254261 | orchestrator | 2025-04-10 00:45:30.256129 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:30.256294 | orchestrator | Thursday 10 April 2025 00:45:30 +0000 (0:00:00.237) 0:00:54.160 ******** 2025-04-10 00:45:30.738417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-04-10 00:45:30.739415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-04-10 00:45:30.740611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-04-10 00:45:30.743988 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-04-10 00:45:30.744875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-04-10 00:45:30.744915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-04-10 00:45:30.744939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-04-10 00:45:30.745706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-04-10 00:45:30.746388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-04-10 00:45:30.746799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-04-10 00:45:30.747246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-04-10 00:45:30.748244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-04-10 00:45:30.749139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-04-10 00:45:30.749410 | orchestrator | 2025-04-10 00:45:30.750178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:30.750323 | orchestrator | Thursday 10 April 2025 00:45:30 +0000 (0:00:00.484) 0:00:54.645 ******** 2025-04-10 00:45:30.947908 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:30.949224 | orchestrator | 2025-04-10 00:45:30.949595 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:30.950695 | orchestrator | Thursday 10 April 2025 00:45:30 +0000 (0:00:00.209) 0:00:54.854 ******** 2025-04-10 00:45:31.157818 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:31.158337 | orchestrator | 2025-04-10 
00:45:31.158380 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:31.158743 | orchestrator | Thursday 10 April 2025 00:45:31 +0000 (0:00:00.209) 0:00:55.064 ******** 2025-04-10 00:45:31.358088 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:31.358836 | orchestrator | 2025-04-10 00:45:31.360679 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:31.360788 | orchestrator | Thursday 10 April 2025 00:45:31 +0000 (0:00:00.199) 0:00:55.264 ******** 2025-04-10 00:45:31.572376 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:31.572905 | orchestrator | 2025-04-10 00:45:31.573491 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:31.574240 | orchestrator | Thursday 10 April 2025 00:45:31 +0000 (0:00:00.212) 0:00:55.477 ******** 2025-04-10 00:45:31.796202 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:31.797153 | orchestrator | 2025-04-10 00:45:31.798170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:31.799258 | orchestrator | Thursday 10 April 2025 00:45:31 +0000 (0:00:00.225) 0:00:55.702 ******** 2025-04-10 00:45:32.416415 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:32.416590 | orchestrator | 2025-04-10 00:45:32.416911 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:32.417330 | orchestrator | Thursday 10 April 2025 00:45:32 +0000 (0:00:00.620) 0:00:56.323 ******** 2025-04-10 00:45:32.641342 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:32.641582 | orchestrator | 2025-04-10 00:45:32.641879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:32.642660 | orchestrator | Thursday 10 April 2025 00:45:32 +0000 (0:00:00.226) 
0:00:56.549 ******** 2025-04-10 00:45:32.894129 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:32.894299 | orchestrator | 2025-04-10 00:45:32.897076 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:32.897848 | orchestrator | Thursday 10 April 2025 00:45:32 +0000 (0:00:00.250) 0:00:56.799 ******** 2025-04-10 00:45:33.346425 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817) 2025-04-10 00:45:33.346589 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817) 2025-04-10 00:45:33.349084 | orchestrator | 2025-04-10 00:45:33.815260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:33.815386 | orchestrator | Thursday 10 April 2025 00:45:33 +0000 (0:00:00.451) 0:00:57.251 ******** 2025-04-10 00:45:33.815422 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7b59c1d3-d88b-4e69-8f5d-bfd6640ee0c1) 2025-04-10 00:45:33.816187 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7b59c1d3-d88b-4e69-8f5d-bfd6640ee0c1) 2025-04-10 00:45:33.816687 | orchestrator | 2025-04-10 00:45:33.817516 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:33.818316 | orchestrator | Thursday 10 April 2025 00:45:33 +0000 (0:00:00.468) 0:00:57.719 ******** 2025-04-10 00:45:34.264390 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8309ccf2-021f-4ba0-8871-1baa1ae2c644) 2025-04-10 00:45:34.264767 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8309ccf2-021f-4ba0-8871-1baa1ae2c644) 2025-04-10 00:45:34.266344 | orchestrator | 2025-04-10 00:45:34.266721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:34.267375 | orchestrator | Thursday 10 
April 2025 00:45:34 +0000 (0:00:00.450) 0:00:58.170 ******** 2025-04-10 00:45:34.742588 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_221f8640-be1f-4702-ab57-197a8a373172) 2025-04-10 00:45:34.742767 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_221f8640-be1f-4702-ab57-197a8a373172) 2025-04-10 00:45:34.742791 | orchestrator | 2025-04-10 00:45:34.742807 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-10 00:45:34.742828 | orchestrator | Thursday 10 April 2025 00:45:34 +0000 (0:00:00.477) 0:00:58.647 ******** 2025-04-10 00:45:35.102416 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-10 00:45:35.105829 | orchestrator | 2025-04-10 00:45:35.106485 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:35.106533 | orchestrator | Thursday 10 April 2025 00:45:35 +0000 (0:00:00.359) 0:00:59.007 ******** 2025-04-10 00:45:35.623296 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-04-10 00:45:35.630716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-04-10 00:45:35.631507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-04-10 00:45:35.632682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-04-10 00:45:35.633241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-04-10 00:45:35.633897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-04-10 00:45:35.635181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-04-10 00:45:35.635266 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-04-10 00:45:35.635834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-04-10 00:45:35.636177 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-04-10 00:45:35.636502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-04-10 00:45:35.637077 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-04-10 00:45:35.637401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-04-10 00:45:35.637700 | orchestrator | 2025-04-10 00:45:35.638090 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:36.234317 | orchestrator | Thursday 10 April 2025 00:45:35 +0000 (0:00:00.522) 0:00:59.529 ******** 2025-04-10 00:45:36.234423 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:36.438637 | orchestrator | 2025-04-10 00:45:36.438750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:36.438768 | orchestrator | Thursday 10 April 2025 00:45:36 +0000 (0:00:00.610) 0:01:00.140 ******** 2025-04-10 00:45:36.438799 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:36.440195 | orchestrator | 2025-04-10 00:45:36.440915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:36.444507 | orchestrator | Thursday 10 April 2025 00:45:36 +0000 (0:00:00.204) 0:01:00.344 ******** 2025-04-10 00:45:36.645747 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:36.646318 | orchestrator | 2025-04-10 00:45:36.647551 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:36.648386 | 
orchestrator | Thursday 10 April 2025 00:45:36 +0000 (0:00:00.206) 0:01:00.551 ******** 2025-04-10 00:45:36.868076 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:36.868267 | orchestrator | 2025-04-10 00:45:36.868960 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:36.869354 | orchestrator | Thursday 10 April 2025 00:45:36 +0000 (0:00:00.224) 0:01:00.776 ******** 2025-04-10 00:45:37.081736 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:37.082086 | orchestrator | 2025-04-10 00:45:37.082584 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:37.084387 | orchestrator | Thursday 10 April 2025 00:45:37 +0000 (0:00:00.209) 0:01:00.986 ******** 2025-04-10 00:45:37.290915 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:37.291355 | orchestrator | 2025-04-10 00:45:37.292069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:37.292730 | orchestrator | Thursday 10 April 2025 00:45:37 +0000 (0:00:00.211) 0:01:01.197 ******** 2025-04-10 00:45:37.539495 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:37.539696 | orchestrator | 2025-04-10 00:45:37.540716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:37.542291 | orchestrator | Thursday 10 April 2025 00:45:37 +0000 (0:00:00.246) 0:01:01.444 ******** 2025-04-10 00:45:37.760126 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:37.760488 | orchestrator | 2025-04-10 00:45:37.761527 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:37.761629 | orchestrator | Thursday 10 April 2025 00:45:37 +0000 (0:00:00.222) 0:01:01.667 ******** 2025-04-10 00:45:38.663805 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-04-10 00:45:38.664015 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-04-10 00:45:38.664040 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-04-10 00:45:38.664059 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-04-10 00:45:38.664289 | orchestrator | 2025-04-10 00:45:38.664319 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:38.665059 | orchestrator | Thursday 10 April 2025 00:45:38 +0000 (0:00:00.901) 0:01:02.568 ******** 2025-04-10 00:45:38.879286 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:38.879438 | orchestrator | 2025-04-10 00:45:38.879468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:38.879661 | orchestrator | Thursday 10 April 2025 00:45:38 +0000 (0:00:00.218) 0:01:02.786 ******** 2025-04-10 00:45:39.623291 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:39.623741 | orchestrator | 2025-04-10 00:45:39.626645 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:39.627039 | orchestrator | Thursday 10 April 2025 00:45:39 +0000 (0:00:00.742) 0:01:03.529 ******** 2025-04-10 00:45:39.844563 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:39.844973 | orchestrator | 2025-04-10 00:45:39.845016 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-10 00:45:39.845266 | orchestrator | Thursday 10 April 2025 00:45:39 +0000 (0:00:00.222) 0:01:03.751 ******** 2025-04-10 00:45:40.059555 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:40.060098 | orchestrator | 2025-04-10 00:45:40.061073 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-10 00:45:40.061428 | orchestrator | Thursday 10 April 2025 00:45:40 +0000 (0:00:00.215) 0:01:03.966 ******** 2025-04-10 00:45:40.206356 | orchestrator | skipping: [testbed-node-5] 2025-04-10 
00:45:40.208803 | orchestrator | 2025-04-10 00:45:40.210110 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-10 00:45:40.210526 | orchestrator | Thursday 10 April 2025 00:45:40 +0000 (0:00:00.144) 0:01:04.111 ******** 2025-04-10 00:45:40.418919 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47ce51ce-522f-5092-939d-97f529b04c78'}}) 2025-04-10 00:45:40.419607 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1024c186-728b-5ddc-b380-e3967fe3a792'}}) 2025-04-10 00:45:40.420560 | orchestrator | 2025-04-10 00:45:40.421140 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-10 00:45:40.421773 | orchestrator | Thursday 10 April 2025 00:45:40 +0000 (0:00:00.213) 0:01:04.325 ******** 2025-04-10 00:45:42.284415 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'}) 2025-04-10 00:45:42.284597 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'}) 2025-04-10 00:45:42.284924 | orchestrator | 2025-04-10 00:45:42.285754 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-10 00:45:42.286200 | orchestrator | Thursday 10 April 2025 00:45:42 +0000 (0:00:01.864) 0:01:06.189 ******** 2025-04-10 00:45:42.468977 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:42.469752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:42.470613 | orchestrator | skipping: 
[testbed-node-5] 2025-04-10 00:45:42.471035 | orchestrator | 2025-04-10 00:45:42.472709 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-10 00:45:42.473489 | orchestrator | Thursday 10 April 2025 00:45:42 +0000 (0:00:00.184) 0:01:06.374 ******** 2025-04-10 00:45:43.824232 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'}) 2025-04-10 00:45:43.825158 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'}) 2025-04-10 00:45:43.825355 | orchestrator | 2025-04-10 00:45:43.826169 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-10 00:45:43.826390 | orchestrator | Thursday 10 April 2025 00:45:43 +0000 (0:00:01.356) 0:01:07.730 ******** 2025-04-10 00:45:44.002345 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:44.004192 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:44.005162 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:44.006672 | orchestrator | 2025-04-10 00:45:44.007851 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-10 00:45:44.008480 | orchestrator | Thursday 10 April 2025 00:45:43 +0000 (0:00:00.179) 0:01:07.909 ******** 2025-04-10 00:45:44.345039 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:44.345251 | orchestrator | 2025-04-10 00:45:44.346116 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-04-10 00:45:44.346699 | 
orchestrator | Thursday 10 April 2025 00:45:44 +0000 (0:00:00.342) 0:01:08.252 ******** 2025-04-10 00:45:44.535580 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:44.536570 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:44.537234 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:44.538066 | orchestrator | 2025-04-10 00:45:44.538920 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-10 00:45:44.540937 | orchestrator | Thursday 10 April 2025 00:45:44 +0000 (0:00:00.189) 0:01:08.442 ******** 2025-04-10 00:45:44.685708 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:44.688282 | orchestrator | 2025-04-10 00:45:44.688576 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-10 00:45:44.689766 | orchestrator | Thursday 10 April 2025 00:45:44 +0000 (0:00:00.150) 0:01:08.592 ******** 2025-04-10 00:45:44.894123 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:44.894709 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:44.895399 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:44.896172 | orchestrator | 2025-04-10 00:45:44.897090 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-10 00:45:44.897653 | orchestrator | Thursday 10 April 2025 00:45:44 +0000 (0:00:00.208) 0:01:08.801 ******** 2025-04-10 00:45:45.042147 | orchestrator | 
skipping: [testbed-node-5] 2025-04-10 00:45:45.042521 | orchestrator | 2025-04-10 00:45:45.043214 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-10 00:45:45.044046 | orchestrator | Thursday 10 April 2025 00:45:45 +0000 (0:00:00.148) 0:01:08.949 ******** 2025-04-10 00:45:45.208483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:45.209759 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:45.211906 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:45.212495 | orchestrator | 2025-04-10 00:45:45.213013 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-10 00:45:45.213492 | orchestrator | Thursday 10 April 2025 00:45:45 +0000 (0:00:00.165) 0:01:09.114 ******** 2025-04-10 00:45:45.353296 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:45:45.353480 | orchestrator | 2025-04-10 00:45:45.354112 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-10 00:45:45.355060 | orchestrator | Thursday 10 April 2025 00:45:45 +0000 (0:00:00.141) 0:01:09.256 ******** 2025-04-10 00:45:45.526869 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:45.527108 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:45.527712 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:45.529811 | orchestrator | 2025-04-10 00:45:45.530323 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2025-04-10 00:45:45.531035 | orchestrator | Thursday 10 April 2025 00:45:45 +0000 (0:00:00.177) 0:01:09.433 ******** 2025-04-10 00:45:45.716091 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:45.716553 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:45.717139 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:45.718201 | orchestrator | 2025-04-10 00:45:45.719319 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-10 00:45:45.719787 | orchestrator | Thursday 10 April 2025 00:45:45 +0000 (0:00:00.189) 0:01:09.622 ******** 2025-04-10 00:45:45.893806 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:45.894125 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:45.895144 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:45.896259 | orchestrator | 2025-04-10 00:45:45.897008 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-10 00:45:45.897844 | orchestrator | Thursday 10 April 2025 00:45:45 +0000 (0:00:00.176) 0:01:09.799 ******** 2025-04-10 00:45:46.042488 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:46.043437 | orchestrator | 2025-04-10 00:45:46.044482 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-10 00:45:46.045649 | orchestrator | Thursday 10 April 2025 00:45:46 +0000 
(0:00:00.148) 0:01:09.948 ******** 2025-04-10 00:45:46.416544 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:46.416771 | orchestrator | 2025-04-10 00:45:46.416801 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-10 00:45:46.416825 | orchestrator | Thursday 10 April 2025 00:45:46 +0000 (0:00:00.374) 0:01:10.322 ******** 2025-04-10 00:45:46.558268 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:46.558482 | orchestrator | 2025-04-10 00:45:46.558515 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-10 00:45:46.560438 | orchestrator | Thursday 10 April 2025 00:45:46 +0000 (0:00:00.142) 0:01:10.464 ******** 2025-04-10 00:45:46.723938 | orchestrator | ok: [testbed-node-5] => { 2025-04-10 00:45:46.724154 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-10 00:45:46.724199 | orchestrator | } 2025-04-10 00:45:46.724564 | orchestrator | 2025-04-10 00:45:46.725081 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-10 00:45:46.725177 | orchestrator | Thursday 10 April 2025 00:45:46 +0000 (0:00:00.166) 0:01:10.630 ******** 2025-04-10 00:45:46.875480 | orchestrator | ok: [testbed-node-5] => { 2025-04-10 00:45:46.875685 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-10 00:45:46.876549 | orchestrator | } 2025-04-10 00:45:46.876873 | orchestrator | 2025-04-10 00:45:46.877599 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-10 00:45:46.878154 | orchestrator | Thursday 10 April 2025 00:45:46 +0000 (0:00:00.151) 0:01:10.782 ******** 2025-04-10 00:45:47.016907 | orchestrator | ok: [testbed-node-5] => { 2025-04-10 00:45:47.019646 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-10 00:45:47.021500 | orchestrator | } 2025-04-10 00:45:47.023385 | orchestrator | 2025-04-10 00:45:47.024019 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-10 00:45:47.025252 | orchestrator | Thursday 10 April 2025 00:45:47 +0000 (0:00:00.141) 0:01:10.923 ******** 2025-04-10 00:45:47.575191 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:45:47.575449 | orchestrator | 2025-04-10 00:45:47.578991 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-10 00:45:47.580146 | orchestrator | Thursday 10 April 2025 00:45:47 +0000 (0:00:00.556) 0:01:11.480 ******** 2025-04-10 00:45:48.147029 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:45:48.147521 | orchestrator | 2025-04-10 00:45:48.147558 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-04-10 00:45:48.147590 | orchestrator | Thursday 10 April 2025 00:45:48 +0000 (0:00:00.574) 0:01:12.054 ******** 2025-04-10 00:45:48.716434 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:45:48.716603 | orchestrator | 2025-04-10 00:45:48.716633 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-04-10 00:45:48.717048 | orchestrator | Thursday 10 April 2025 00:45:48 +0000 (0:00:00.566) 0:01:12.620 ******** 2025-04-10 00:45:48.898349 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:45:48.901240 | orchestrator | 2025-04-10 00:45:48.903440 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-10 00:45:49.021112 | orchestrator | Thursday 10 April 2025 00:45:48 +0000 (0:00:00.184) 0:01:12.805 ******** 2025-04-10 00:45:49.021252 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:49.022457 | orchestrator | 2025-04-10 00:45:49.023407 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-10 00:45:49.025756 | orchestrator | Thursday 10 April 2025 00:45:49 +0000 (0:00:00.122) 0:01:12.927 ******** 2025-04-10 00:45:49.145357 | 
orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:49.146161 | orchestrator | 2025-04-10 00:45:49.146199 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-10 00:45:49.147005 | orchestrator | Thursday 10 April 2025 00:45:49 +0000 (0:00:00.124) 0:01:13.051 ******** 2025-04-10 00:45:49.551390 | orchestrator | ok: [testbed-node-5] => { 2025-04-10 00:45:49.555575 | orchestrator |  "vgs_report": { 2025-04-10 00:45:49.556022 | orchestrator |  "vg": [] 2025-04-10 00:45:49.556065 | orchestrator |  } 2025-04-10 00:45:49.557257 | orchestrator | } 2025-04-10 00:45:49.558062 | orchestrator | 2025-04-10 00:45:49.559099 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-10 00:45:49.560042 | orchestrator | Thursday 10 April 2025 00:45:49 +0000 (0:00:00.405) 0:01:13.456 ******** 2025-04-10 00:45:49.718144 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:49.718276 | orchestrator | 2025-04-10 00:45:49.719140 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-04-10 00:45:49.720590 | orchestrator | Thursday 10 April 2025 00:45:49 +0000 (0:00:00.166) 0:01:13.623 ******** 2025-04-10 00:45:49.871597 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:49.872466 | orchestrator | 2025-04-10 00:45:49.875239 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-04-10 00:45:49.876651 | orchestrator | Thursday 10 April 2025 00:45:49 +0000 (0:00:00.153) 0:01:13.777 ******** 2025-04-10 00:45:50.010861 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:50.011747 | orchestrator | 2025-04-10 00:45:50.013039 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-10 00:45:50.013659 | orchestrator | Thursday 10 April 2025 00:45:50 +0000 (0:00:00.140) 0:01:13.917 ******** 2025-04-10 00:45:50.171411 | 
orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:50.172315 | orchestrator | 2025-04-10 00:45:50.175806 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-10 00:45:50.327810 | orchestrator | Thursday 10 April 2025 00:45:50 +0000 (0:00:00.160) 0:01:14.078 ******** 2025-04-10 00:45:50.327934 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:50.329102 | orchestrator | 2025-04-10 00:45:50.330895 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-10 00:45:50.331798 | orchestrator | Thursday 10 April 2025 00:45:50 +0000 (0:00:00.156) 0:01:14.234 ******** 2025-04-10 00:45:50.474459 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:50.475340 | orchestrator | 2025-04-10 00:45:50.476036 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-10 00:45:50.477702 | orchestrator | Thursday 10 April 2025 00:45:50 +0000 (0:00:00.143) 0:01:14.378 ******** 2025-04-10 00:45:50.622379 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:50.623132 | orchestrator | 2025-04-10 00:45:50.623351 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-10 00:45:50.624401 | orchestrator | Thursday 10 April 2025 00:45:50 +0000 (0:00:00.150) 0:01:14.529 ******** 2025-04-10 00:45:50.783522 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:50.785724 | orchestrator | 2025-04-10 00:45:50.787176 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-10 00:45:50.924123 | orchestrator | Thursday 10 April 2025 00:45:50 +0000 (0:00:00.160) 0:01:14.689 ******** 2025-04-10 00:45:50.924223 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:50.925747 | orchestrator | 2025-04-10 00:45:50.926290 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2025-04-10 00:45:50.926933 | orchestrator | Thursday 10 April 2025 00:45:50 +0000 (0:00:00.141) 0:01:14.831 ******** 2025-04-10 00:45:51.076301 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:51.076822 | orchestrator | 2025-04-10 00:45:51.077693 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-10 00:45:51.077928 | orchestrator | Thursday 10 April 2025 00:45:51 +0000 (0:00:00.152) 0:01:14.983 ******** 2025-04-10 00:45:51.202262 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:51.202515 | orchestrator | 2025-04-10 00:45:51.202980 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-10 00:45:51.203506 | orchestrator | Thursday 10 April 2025 00:45:51 +0000 (0:00:00.125) 0:01:15.108 ******** 2025-04-10 00:45:51.549180 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:51.550187 | orchestrator | 2025-04-10 00:45:51.551024 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-04-10 00:45:51.551696 | orchestrator | Thursday 10 April 2025 00:45:51 +0000 (0:00:00.345) 0:01:15.454 ******** 2025-04-10 00:45:51.694323 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:51.694518 | orchestrator | 2025-04-10 00:45:51.695159 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-10 00:45:51.842294 | orchestrator | Thursday 10 April 2025 00:45:51 +0000 (0:00:00.147) 0:01:15.601 ******** 2025-04-10 00:45:51.842534 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:51.842731 | orchestrator | 2025-04-10 00:45:51.842831 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-10 00:45:51.843551 | orchestrator | Thursday 10 April 2025 00:45:51 +0000 (0:00:00.147) 0:01:15.749 ******** 2025-04-10 00:45:52.032147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:52.032317 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:52.032659 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:52.034373 | orchestrator | 2025-04-10 00:45:52.035302 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-10 00:45:52.036065 | orchestrator | Thursday 10 April 2025 00:45:52 +0000 (0:00:00.188) 0:01:15.937 ******** 2025-04-10 00:45:52.212420 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:52.390157 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:52.390236 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:52.390253 | orchestrator | 2025-04-10 00:45:52.390270 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-10 00:45:52.390286 | orchestrator | Thursday 10 April 2025 00:45:52 +0000 (0:00:00.180) 0:01:16.117 ******** 2025-04-10 00:45:52.390333 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:52.390416 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:52.390439 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:52.392525 | orchestrator | 2025-04-10 00:45:52.393033 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2025-04-10 00:45:52.393155 | orchestrator | Thursday 10 April 2025 00:45:52 +0000 (0:00:00.176) 0:01:16.294 ******** 2025-04-10 00:45:52.571693 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:52.571890 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:52.572833 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:52.572899 | orchestrator | 2025-04-10 00:45:52.573244 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-10 00:45:52.573724 | orchestrator | Thursday 10 April 2025 00:45:52 +0000 (0:00:00.179) 0:01:16.473 ******** 2025-04-10 00:45:52.746600 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:52.747590 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:52.747899 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:52.747930 | orchestrator | 2025-04-10 00:45:52.747980 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-10 00:45:52.748997 | orchestrator | Thursday 10 April 2025 00:45:52 +0000 (0:00:00.180) 0:01:16.654 ******** 2025-04-10 00:45:52.950276 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:52.951581 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:52.952269 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:52.952314 | orchestrator | 2025-04-10 00:45:52.952930 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-10 00:45:53.141797 | orchestrator | Thursday 10 April 2025 00:45:52 +0000 (0:00:00.201) 0:01:16.856 ******** 2025-04-10 00:45:53.141927 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:53.143299 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:53.144202 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:53.145351 | orchestrator | 2025-04-10 00:45:53.146221 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-10 00:45:53.147128 | orchestrator | Thursday 10 April 2025 00:45:53 +0000 (0:00:00.192) 0:01:17.048 ******** 2025-04-10 00:45:53.319658 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:53.321111 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:53.322125 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:53.323095 | orchestrator | 2025-04-10 00:45:53.324619 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-10 00:45:53.325334 | orchestrator | Thursday 10 April 2025 00:45:53 +0000 (0:00:00.177) 0:01:17.225 ******** 2025-04-10 00:45:54.087926 | 
orchestrator | ok: [testbed-node-5] 2025-04-10 00:45:54.088184 | orchestrator | 2025-04-10 00:45:54.089056 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-04-10 00:45:54.089222 | orchestrator | Thursday 10 April 2025 00:45:54 +0000 (0:00:00.764) 0:01:17.989 ******** 2025-04-10 00:45:54.644473 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:45:54.645415 | orchestrator | 2025-04-10 00:45:54.647683 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-10 00:45:54.804286 | orchestrator | Thursday 10 April 2025 00:45:54 +0000 (0:00:00.560) 0:01:18.550 ******** 2025-04-10 00:45:54.804432 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:45:54.805153 | orchestrator | 2025-04-10 00:45:54.805975 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-10 00:45:54.808693 | orchestrator | Thursday 10 April 2025 00:45:54 +0000 (0:00:00.159) 0:01:18.709 ******** 2025-04-10 00:45:55.013364 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'vg_name': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'}) 2025-04-10 00:45:55.013678 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'vg_name': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'}) 2025-04-10 00:45:55.015144 | orchestrator | 2025-04-10 00:45:55.015923 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-10 00:45:55.016300 | orchestrator | Thursday 10 April 2025 00:45:55 +0000 (0:00:00.210) 0:01:18.920 ******** 2025-04-10 00:45:55.214331 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:55.215082 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:55.216185 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:55.217665 | orchestrator | 2025-04-10 00:45:55.218520 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-10 00:45:55.219317 | orchestrator | Thursday 10 April 2025 00:45:55 +0000 (0:00:00.200) 0:01:19.120 ******** 2025-04-10 00:45:55.398489 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:55.399222 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:55.400284 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:55.400691 | orchestrator | 2025-04-10 00:45:55.401930 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-10 00:45:55.402988 | orchestrator | Thursday 10 April 2025 00:45:55 +0000 (0:00:00.184) 0:01:19.304 ******** 2025-04-10 00:45:55.596533 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'})  2025-04-10 00:45:55.597643 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'})  2025-04-10 00:45:55.598717 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:45:55.600127 | orchestrator | 2025-04-10 00:45:55.601371 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-04-10 00:45:55.602255 | orchestrator | Thursday 10 April 2025 00:45:55 +0000 (0:00:00.196) 0:01:19.501 ******** 2025-04-10 00:45:56.243063 | 
orchestrator | ok: [testbed-node-5] => { 2025-04-10 00:45:56.243243 | orchestrator |  "lvm_report": { 2025-04-10 00:45:56.244433 | orchestrator |  "lv": [ 2025-04-10 00:45:56.245158 | orchestrator |  { 2025-04-10 00:45:56.245545 | orchestrator |  "lv_name": "osd-block-1024c186-728b-5ddc-b380-e3967fe3a792", 2025-04-10 00:45:56.246274 | orchestrator |  "vg_name": "ceph-1024c186-728b-5ddc-b380-e3967fe3a792" 2025-04-10 00:45:56.246902 | orchestrator |  }, 2025-04-10 00:45:56.248007 | orchestrator |  { 2025-04-10 00:45:56.249712 | orchestrator |  "lv_name": "osd-block-47ce51ce-522f-5092-939d-97f529b04c78", 2025-04-10 00:45:56.251341 | orchestrator |  "vg_name": "ceph-47ce51ce-522f-5092-939d-97f529b04c78" 2025-04-10 00:45:56.252545 | orchestrator |  } 2025-04-10 00:45:56.253524 | orchestrator |  ], 2025-04-10 00:45:56.253809 | orchestrator |  "pv": [ 2025-04-10 00:45:56.255342 | orchestrator |  { 2025-04-10 00:45:56.256315 | orchestrator |  "pv_name": "/dev/sdb", 2025-04-10 00:45:56.256986 | orchestrator |  "vg_name": "ceph-47ce51ce-522f-5092-939d-97f529b04c78" 2025-04-10 00:45:56.257327 | orchestrator |  }, 2025-04-10 00:45:56.258287 | orchestrator |  { 2025-04-10 00:45:56.259081 | orchestrator |  "pv_name": "/dev/sdc", 2025-04-10 00:45:56.259607 | orchestrator |  "vg_name": "ceph-1024c186-728b-5ddc-b380-e3967fe3a792" 2025-04-10 00:45:56.259656 | orchestrator |  } 2025-04-10 00:45:56.260125 | orchestrator |  ] 2025-04-10 00:45:56.260668 | orchestrator |  } 2025-04-10 00:45:56.260875 | orchestrator | } 2025-04-10 00:45:56.261706 | orchestrator | 2025-04-10 00:45:56.262119 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:45:56.262667 | orchestrator | 2025-04-10 00:45:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-10 00:45:56.262989 | orchestrator | 2025-04-10 00:45:56 | INFO  | Please wait and do not abort execution. 
2025-04-10 00:45:56.263772 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-04-10 00:45:56.264623 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-04-10 00:45:56.264689 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-04-10 00:45:56.265050 | orchestrator | 2025-04-10 00:45:56.265284 | orchestrator | 2025-04-10 00:45:56.265579 | orchestrator | 2025-04-10 00:45:56.266237 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 00:45:56.266452 | orchestrator | Thursday 10 April 2025 00:45:56 +0000 (0:00:00.649) 0:01:20.150 ******** 2025-04-10 00:45:56.266673 | orchestrator | =============================================================================== 2025-04-10 00:45:56.267053 | orchestrator | Create block VGs -------------------------------------------------------- 5.92s 2025-04-10 00:45:56.267325 | orchestrator | Create block LVs -------------------------------------------------------- 4.14s 2025-04-10 00:45:56.267777 | orchestrator | Print LVM report data --------------------------------------------------- 2.33s 2025-04-10 00:45:56.267919 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.23s 2025-04-10 00:45:56.268916 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.85s 2025-04-10 00:45:56.270070 | orchestrator | Add known links to the list of available block devices ------------------ 1.76s 2025-04-10 00:45:56.270168 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.67s 2025-04-10 00:45:56.270303 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s 2025-04-10 00:45:56.271031 | orchestrator | Get list of Ceph PVs with associated VGs 
-------------------------------- 1.58s 2025-04-10 00:45:56.271691 | orchestrator | Add known partitions to the list of available block devices ------------- 1.52s 2025-04-10 00:45:56.272458 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.14s 2025-04-10 00:45:56.272915 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 1.02s 2025-04-10 00:45:56.273483 | orchestrator | Add known partitions to the list of available block devices ------------- 0.90s 2025-04-10 00:45:56.274534 | orchestrator | Add known links to the list of available block devices ------------------ 0.86s 2025-04-10 00:45:56.274985 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.80s 2025-04-10 00:45:56.275037 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.76s 2025-04-10 00:45:56.275116 | orchestrator | Get initial list of available block devices ----------------------------- 0.76s 2025-04-10 00:45:56.275884 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2025-04-10 00:45:56.276475 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2025-04-10 00:45:56.277856 | orchestrator | Print LVM VGs report data ----------------------------------------------- 0.73s 2025-04-10 00:45:58.386335 | orchestrator | 2025-04-10 00:45:58 | INFO  | Task e72df104-5b10-43be-8af4-99b0e3181833 (facts) was prepared for execution. 2025-04-10 00:46:01.703760 | orchestrator | 2025-04-10 00:45:58 | INFO  | It takes a moment until task e72df104-5b10-43be-8af4-99b0e3181833 (facts) has been started and output is visible here. 
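The play above gathers LV and PV data as JSON ("Gather DB VGs with total and available size in bytes", "Combine JSON from _lvs_cmd_output/_pvs_cmd_output", "Create list of VG/LV names") and finally prints the combined `lvm_report`. The merging step can be sketched as follows; this is a minimal illustration, assuming lvm2's `--reportformat json` output shape, with the captured strings modelled on the values in this log rather than taken from real command output:

```python
import json

# Hypothetical captures of `lvs --reportformat json -o lv_name,vg_name`
# and `pvs --reportformat json -o pv_name,vg_name`; lvm2 wraps the rows
# in a top-level "report" list. Values mirror the lvm_report in the log.
lvs_json = '''{"report": [{"lv": [
  {"lv_name": "osd-block-1024c186-728b-5ddc-b380-e3967fe3a792",
   "vg_name": "ceph-1024c186-728b-5ddc-b380-e3967fe3a792"},
  {"lv_name": "osd-block-47ce51ce-522f-5092-939d-97f529b04c78",
   "vg_name": "ceph-47ce51ce-522f-5092-939d-97f529b04c78"}]}]}'''
pvs_json = '''{"report": [{"pv": [
  {"pv_name": "/dev/sdb",
   "vg_name": "ceph-47ce51ce-522f-5092-939d-97f529b04c78"},
  {"pv_name": "/dev/sdc",
   "vg_name": "ceph-1024c186-728b-5ddc-b380-e3967fe3a792"}]}]}'''

# "Combine JSON from _lvs_cmd_output/_pvs_cmd_output": merge the two
# reports into one structure shaped like the lvm_report printed above.
lvm_report = {
    "lv": json.loads(lvs_json)["report"][0]["lv"],
    "pv": json.loads(pvs_json)["report"][0]["pv"],
}

# "Create list of VG/LV names": an OSD's block device is addressed as
# <vg_name>/<lv_name>, so build those pairs for the later
# "Fail if ... LV defined in lvm_volumes is missing" checks.
vg_lv_names = [f"{lv['vg_name']}/{lv['lv_name']}" for lv in lvm_report["lv"]]
print(vg_lv_names)
```

The same pattern extends to the VG size checks: `vgs` can report `vg_size`/`vg_free` in bytes, and the play compares those numbers against the space the DB/WAL LVs would need.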
2025-04-10 00:46:01.703899 | orchestrator | 2025-04-10 00:46:01.704500 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-04-10 00:46:01.704541 | orchestrator | 2025-04-10 00:46:01.705587 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-10 00:46:01.706383 | orchestrator | Thursday 10 April 2025 00:46:01 +0000 (0:00:00.219) 0:00:00.219 ******** 2025-04-10 00:46:02.827771 | orchestrator | ok: [testbed-manager] 2025-04-10 00:46:02.827900 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:46:02.827921 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:46:02.828391 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:46:02.829115 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:46:02.829687 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:46:02.829751 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:46:02.830164 | orchestrator | 2025-04-10 00:46:02.833808 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-10 00:46:03.004647 | orchestrator | Thursday 10 April 2025 00:46:02 +0000 (0:00:01.121) 0:00:01.341 ******** 2025-04-10 00:46:03.004783 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:46:03.096123 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:46:03.178328 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:46:03.259471 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:46:03.338093 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:46:04.142345 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:46:04.142795 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:46:04.142840 | orchestrator | 2025-04-10 00:46:04.143799 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-10 00:46:04.144319 | orchestrator | 2025-04-10 00:46:04.147550 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-04-10 00:46:09.762557 | orchestrator | Thursday 10 April 2025 00:46:04 +0000 (0:00:01.319) 0:00:02.660 ******** 2025-04-10 00:46:09.762701 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:46:09.763027 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:46:09.764419 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:46:09.766726 | orchestrator | ok: [testbed-manager] 2025-04-10 00:46:09.767408 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:46:09.768134 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:46:09.769363 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:46:09.769806 | orchestrator | 2025-04-10 00:46:09.770780 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-10 00:46:09.771068 | orchestrator | 2025-04-10 00:46:09.771811 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-10 00:46:09.772471 | orchestrator | Thursday 10 April 2025 00:46:09 +0000 (0:00:05.619) 0:00:08.280 ******** 2025-04-10 00:46:10.108641 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:46:10.197424 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:46:10.274877 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:46:10.354178 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:46:10.435391 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:46:10.485170 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:46:10.486017 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:46:10.486100 | orchestrator | 2025-04-10 00:46:10.486612 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:46:10.487420 | orchestrator | 2025-04-10 00:46:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-04-10 00:46:10.488030 | orchestrator | 2025-04-10 00:46:10 | INFO  | Please wait and do not abort execution. 2025-04-10 00:46:10.488080 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:46:10.488847 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:46:10.489221 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:46:10.489646 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:46:10.490801 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:46:10.491108 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:46:10.491139 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:46:10.491634 | orchestrator | 2025-04-10 00:46:10.492090 | orchestrator | Thursday 10 April 2025 00:46:10 +0000 (0:00:00.722) 0:00:09.003 ******** 2025-04-10 00:46:10.492656 | orchestrator | =============================================================================== 2025-04-10 00:46:10.493588 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.62s 2025-04-10 00:46:10.494158 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s 2025-04-10 00:46:10.495036 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s 2025-04-10 00:46:10.495654 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s 2025-04-10 00:46:11.147989 | orchestrator | 2025-04-10 00:46:11.150868 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Apr 10 00:46:11 UTC 2025 2025-04-10 00:46:12.637362 | 
orchestrator |
2025-04-10 00:46:12.637497 | orchestrator | 2025-04-10 00:46:12 | INFO  | Collection nutshell is prepared for execution
2025-04-10 00:46:12.641894 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [0] - dotfiles
2025-04-10 00:46:12.641967 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [0] - homer
2025-04-10 00:46:12.643430 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [0] - netdata
2025-04-10 00:46:12.643459 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [0] - openstackclient
2025-04-10 00:46:12.643475 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [0] - phpmyadmin
2025-04-10 00:46:12.643490 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [0] - common
2025-04-10 00:46:12.643513 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [1] -- loadbalancer
2025-04-10 00:46:12.643717 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [2] --- opensearch
2025-04-10 00:46:12.643743 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [2] --- mariadb-ng
2025-04-10 00:46:12.643758 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [3] ---- horizon
2025-04-10 00:46:12.643777 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [3] ---- keystone
2025-04-10 00:46:12.644122 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [4] ----- neutron
2025-04-10 00:46:12.644149 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [5] ------ wait-for-nova
2025-04-10 00:46:12.644164 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [5] ------ octavia
2025-04-10 00:46:12.644183 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [4] ----- barbican
2025-04-10 00:46:12.644392 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [4] ----- designate
2025-04-10 00:46:12.644421 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [4] ----- ironic
2025-04-10 00:46:12.645254 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [4] ----- placement
2025-04-10 00:46:12.645377 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [4] ----- magnum
2025-04-10 00:46:12.645410 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [1] -- openvswitch
2025-04-10 00:46:12.645480 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [2] --- ovn
2025-04-10 00:46:12.645496 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [1] -- memcached
2025-04-10 00:46:12.645509 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [1] -- redis
2025-04-10 00:46:12.645522 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [1] -- rabbitmq-ng
2025-04-10 00:46:12.645570 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [0] - kubernetes
2025-04-10 00:46:12.645639 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [1] -- kubeconfig
2025-04-10 00:46:12.645671 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [1] -- copy-kubeconfig
2025-04-10 00:46:12.645689 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [0] - ceph
2025-04-10 00:46:12.647018 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [1] -- ceph-pools
2025-04-10 00:46:12.647862 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [2] --- copy-ceph-keys
2025-04-10 00:46:12.647896 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [3] ---- cephclient
2025-04-10 00:46:12.780072 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-04-10 00:46:12.780168 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [4] ----- wait-for-keystone
2025-04-10 00:46:12.780183 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [5] ------ kolla-ceph-rgw
2025-04-10 00:46:12.780224 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [5] ------ glance
2025-04-10 00:46:12.780238 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [5] ------ cinder
2025-04-10 00:46:12.780251 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [5] ------ nova
2025-04-10 00:46:12.780264 | orchestrator | 2025-04-10 00:46:12 | INFO  | A [4] ----- prometheus
2025-04-10 00:46:12.780277 | orchestrator | 2025-04-10 00:46:12 | INFO  | D [5] ------ grafana
2025-04-10 00:46:12.780304 | orchestrator | 2025-04-10 00:46:12 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-04-10 00:46:14.853150 |
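The D/A tree above is a depth-first listing of the collection's task graph, with the bracketed number and the dashes both encoding nesting depth. A minimal sketch of how such an indented listing can be produced is below; the task names and layout mirror a fragment of the log, but the graph data, the function, and the reading of the flags (D presumably marking a deactivated/skipped task, A an applied one) are assumptions for illustration, not OSISM's implementation:

```python
# Hypothetical sketch: render a task tree with depth markers like "A [1] -- loadbalancer".
# `children` and `flags` are illustrative data mirroring part of the logged tree.
children = {
    "common": ["loadbalancer", "openvswitch"],
    "loadbalancer": ["mariadb-ng"],
    "mariadb-ng": ["keystone"],
    "keystone": ["neutron"],
    "neutron": ["octavia"],
    "openvswitch": [],
}
flags = {"common": "A", "loadbalancer": "A", "mariadb-ng": "A",
         "keystone": "A", "neutron": "A", "octavia": "A", "openvswitch": "A"}

def render(task, depth=0, out=None):
    """Depth-first walk, emitting 'FLAG [depth] <dashes> name' for each task."""
    if out is None:
        out = []
    out.append(f"{flags[task]} [{depth}] {'-' * (depth + 1)} {task}")
    for child in children.get(task, []):
        render(child, depth + 1, out)
    return out

for line in render("common"):
    print(line)
```

Run against the sample graph, this reproduces the same shape as the log lines: the flag, the depth in brackets, and depth+1 dashes before the task name.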
orchestrator | 2025-04-10 00:46:12 | INFO  | Tasks are running in the background
2025-04-10 00:46:14.853291 | orchestrator | 2025-04-10 00:46:14 | INFO  | No task IDs specified, wait for all currently running tasks
2025-04-10 00:46:16.962794 | orchestrator | 2025-04-10 00:46:16 | INFO  | Task fd630575-2f40-4413-8f32-5b7367e97077 is in state STARTED
2025-04-10 00:46:20.016120 | orchestrator | 2025-04-10 00:46:16 | INFO  | Task d4f772ae-07cf-4ac1-8dfa-c71982deb9cc is in state STARTED
2025-04-10 00:46:20.016328 | orchestrator | 2025-04-10 00:46:16 | INFO  | Task 90698315-3dd1-4ff8-acc2-ac8ea47ca413 is in state STARTED
2025-04-10 00:46:20.016349 | orchestrator | 2025-04-10 00:46:16 | INFO  | Task 8d451614-13db-4639-a562-c1d4b40abb51 is in state STARTED
2025-04-10 00:46:20.016365 | orchestrator | 2025-04-10 00:46:16 | INFO  | Task 7d3213b2-518e-4e18-be04-04b04b53b67a is in state STARTED
2025-04-10 00:46:20.016379 | orchestrator | 2025-04-10 00:46:16 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:46:20.016394 | orchestrator | 2025-04-10 00:46:16 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:46:38.439420 | orchestrator |
2025-04-10 00:46:38.439436 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-04-10 00:46:38.439450 | orchestrator |
2025-04-10 00:46:38.439464 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2025-04-10 00:46:38.439479 | orchestrator | Thursday 10 April 2025 00:46:23 +0000 (0:00:00.199) 0:00:00.199 ********
2025-04-10 00:46:38.439493 | orchestrator | changed: [testbed-manager]
2025-04-10 00:46:38.439508 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:46:38.439522 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:46:38.439535 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:46:38.439549 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:46:38.439563 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:46:38.439577 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:46:38.439590 | orchestrator |
2025-04-10 00:46:38.439604 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-04-10 00:46:38.439625 | orchestrator | Thursday 10 April 2025 00:46:27 +0000 (0:00:03.941) 0:00:04.141 ********
2025-04-10 00:46:38.439640 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-04-10 00:46:38.439654 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-04-10 00:46:38.439673 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-04-10 00:46:38.439688 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-04-10 00:46:38.439701 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-04-10 00:46:38.439715 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-04-10 00:46:38.439729 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-04-10 00:46:38.439742 | orchestrator |
2025-04-10 00:46:38.439756 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-04-10 00:46:38.439770 | orchestrator | Thursday 10 April 2025 00:46:30 +0000 (0:00:02.498) 0:00:06.639 ********
2025-04-10 00:46:38.439786 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-10 00:46:28.605747', 'end': '2025-04-10 00:46:28.613812', 'delta': '0:00:00.008065', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-10 00:46:38.439809 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-10 00:46:28.575724', 'end': '2025-04-10 00:46:28.584960', 'delta': '0:00:00.009236', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-10 00:46:38.439825 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-10 00:46:28.671150', 'end': '2025-04-10 00:46:28.684264', 'delta': '0:00:00.013114', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-10 00:46:38.439877 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-10 00:46:28.954189', 'end': '2025-04-10 00:46:28.963727', 'delta': '0:00:00.009538', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-10 00:46:38.439894 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-10 00:46:29.190940', 'end': '2025-04-10 00:46:29.197112', 'delta': '0:00:00.006172', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-10 00:46:38.439908 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-10 00:46:29.446324', 'end': '2025-04-10 00:46:29.456350', 'delta': '0:00:00.010026', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-10 00:46:38.439928 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-10 00:46:29.665801', 'end': '2025-04-10 00:46:29.674420', 'delta': '0:00:00.008619', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-10 00:46:38.439968 | orchestrator |
2025-04-10 00:46:38.439991 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-04-10 00:46:38.440006 | orchestrator | Thursday 10 April 2025 00:46:33 +0000 (0:00:03.572) 0:00:10.212 ********
2025-04-10 00:46:38.440022 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-04-10 00:46:38.440038 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-04-10 00:46:38.440054 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-04-10 00:46:38.440070 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-04-10 00:46:38.440085 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-04-10 00:46:38.440101 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-04-10 00:46:38.440117 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-04-10 00:46:38.440132 | orchestrator |
2025-04-10 00:46:38.440148 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:46:38.440165 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:46:38.440182 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:46:38.440198 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:46:38.440221 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:46:38.440257 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:46:38.440273 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:46:38.440289 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:46:38.440304 | orchestrator |
2025-04-10 00:46:38.440320 | orchestrator | Thursday 10 April 2025 00:46:37 +0000 (0:00:03.597) 0:00:13.809 ********
2025-04-10 00:46:38.440336 | orchestrator | ===============================================================================
2025-04-10 00:46:38.440352 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.94s
2025-04-10 00:46:38.440368 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.60s
2025-04-10 00:46:38.440381 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.57s
2025-04-10 00:46:38.440395 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.50s
2025-04-10 00:46:38.440413 | orchestrator | 2025-04-10 00:46:38 | INFO  | Task 90698315-3dd1-4ff8-acc2-ac8ea47ca413 is in state SUCCESS
2025-04-10 00:46:38.440557 | orchestrator | 2025-04-10 00:46:38 | INFO  | Task 8d451614-13db-4639-a562-c1d4b40abb51 is in state STARTED
2025-04-10 00:46:38.441062 | orchestrator | 2025-04-10 00:46:38 | INFO  | Task 7d3213b2-518e-4e18-be04-04b04b53b67a is in state STARTED
2025-04-10 00:46:38.441709 | orchestrator | 2025-04-10 00:46:38 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:46:41.534179 | orchestrator | 2025-04-10 00:46:38 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:46:41.534272 | orchestrator | 2025-04-10 00:46:41 | INFO  | Task fd630575-2f40-4413-8f32-5b7367e97077 is in state STARTED
2025-04-10 00:46:41.539946 | orchestrator | 2025-04-10 00:46:41 | INFO  | Task d4f772ae-07cf-4ac1-8dfa-c71982deb9cc is in state STARTED
2025-04-10 00:46:41.547139 | orchestrator | 2025-04-10 00:46:41 | INFO  | Task
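The geerlingguy.dotfiles play above does three things per host: clone the dotfiles repository, remove a pre-existing regular file when a replacement is about to be linked, and symlink each configured dotfile into the home directory. A minimal sketch of that remove-then-link logic in plain Python follows; the function, paths, and return shape are hypothetical illustrations of the same idea, not the role's actual implementation (which uses Ansible's git and file modules):

```python
import os
import tempfile

def link_dotfiles(repo_dir, home, names):
    """Replace each dotfile in `home` with a symlink into `repo_dir`,
    removing a pre-existing regular file first (mirroring the role's
    'Remove existing dotfiles file if a replacement is being linked' task)."""
    results = []
    for name in names:
        src = os.path.join(repo_dir, name)
        dst = os.path.join(home, name)
        if os.path.lexists(dst) and not os.path.islink(dst):
            os.remove(dst)  # a replacement is being linked; drop the plain file
        if not os.path.islink(dst):
            os.symlink(src, dst)  # link dotfile into home folder
            results.append((name, "changed"))
        else:
            results.append((name, "ok"))  # already a link, nothing to do
    return results

# usage with throwaway directories standing in for the cloned repo and $HOME
repo = tempfile.mkdtemp()
home = tempfile.mkdtemp()
open(os.path.join(repo, ".tmux.conf"), "w").close()
print(link_dotfiles(repo, home, [".tmux.conf"]))
```

Like the role, the sketch is idempotent: a first run reports the link as changed, a second run finds the symlink already in place and reports ok, matching the ok/changed pattern in the play output above.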
8d451614-13db-4639-a562-c1d4b40abb51 is in state STARTED
2025-04-10 00:46:41.555255 | orchestrator | 2025-04-10 00:46:41 | INFO  | Task 7d3213b2-518e-4e18-be04-04b04b53b67a is in state STARTED
2025-04-10 00:46:41.564708 | orchestrator | 2025-04-10 00:46:41 | INFO  | Task 6c693d45-c5ab-482d-95f7-f6dcc59a9cd9 is in state STARTED
2025-04-10 00:46:41.580866 | orchestrator | 2025-04-10 00:46:41 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:46:44.692109 | orchestrator | 2025-04-10 00:46:41 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:47:00.190994 | orchestrator | 2025-04-10 00:47:00 | INFO  | Task fd630575-2f40-4413-8f32-5b7367e97077 is in state STARTED
2025-04-10 00:47:00.201972 | orchestrator | 2025-04-10 00:47:00 | INFO  | Task d4f772ae-07cf-4ac1-8dfa-c71982deb9cc is in state STARTED
2025-04-10 00:47:00.202823 | orchestrator | 2025-04-10 00:47:00 | INFO  | Task 8d451614-13db-4639-a562-c1d4b40abb51 is in state STARTED
2025-04-10 00:47:00.208758 | orchestrator | 2025-04-10 00:47:00 | INFO  | Task 7d3213b2-518e-4e18-be04-04b04b53b67a is in state SUCCESS
2025-04-10 00:47:00.213167 | orchestrator | 2025-04-10 00:47:00 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 00:47:00.218089 | orchestrator | 2025-04-10 00:47:00 | INFO  | Task 6c693d45-c5ab-482d-95f7-f6dcc59a9cd9 is in state STARTED
2025-04-10 00:47:00.232475 | orchestrator | 2025-04-10 00:47:00 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:47:03.293022 | orchestrator | 2025-04-10 00:47:00 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:47:31.185445 | orchestrator | 2025-04-10 00:47:31 | INFO  | Task fd630575-2f40-4413-8f32-5b7367e97077 is in state SUCCESS
2025-04-10 00:47:31.191442 | orchestrator | 2025-04-10 00:47:31 | INFO  | Task d4f772ae-07cf-4ac1-8dfa-c71982deb9cc is in state STARTED
2025-04-10 00:47:31.191526 | orchestrator | 2025-04-10 00:47:31 | INFO  | Task 8d451614-13db-4639-a562-c1d4b40abb51 is in state STARTED
2025-04-10 00:47:31.191564 | orchestrator | 2025-04-10 00:47:31 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 00:47:31.191615 | orchestrator | 2025-04-10 00:47:31 | INFO  | Task 6c693d45-c5ab-482d-95f7-f6dcc59a9cd9 is in state STARTED
2025-04-10 00:47:31.191640 | orchestrator | 2025-04-10 00:47:31 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:47:34.243917 | orchestrator | 2025-04-10 00:47:31 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:47:34.244137 | orchestrator | 2025-04-10 00:47:34 | INFO  | Task d4f772ae-07cf-4ac1-8dfa-c71982deb9cc is in state STARTED
2025-04-10 00:47:34.244753 | orchestrator | 2025-04-10 00:47:34 | INFO  | Task 8d451614-13db-4639-a562-c1d4b40abb51 is in state STARTED
2025-04-10 00:47:34.246782 | orchestrator | 2025-04-10 00:47:34 | INFO  | Task
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:47:34.256581 | orchestrator | 2025-04-10 00:47:34 | INFO  | Task 6c693d45-c5ab-482d-95f7-f6dcc59a9cd9 is in state STARTED 2025-04-10 00:47:37.336865 | orchestrator | 2025-04-10 00:47:34 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:47:37.337059 | orchestrator | 2025-04-10 00:47:34 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:47:37.337102 | orchestrator | 2025-04-10 00:47:37 | INFO  | Task d4f772ae-07cf-4ac1-8dfa-c71982deb9cc is in state STARTED 2025-04-10 00:47:37.341789 | orchestrator | 2025-04-10 00:47:37 | INFO  | Task 8d451614-13db-4639-a562-c1d4b40abb51 is in state STARTED 2025-04-10 00:47:37.341831 | orchestrator | 2025-04-10 00:47:37 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:47:37.341857 | orchestrator | 2025-04-10 00:47:37 | INFO  | Task 6c693d45-c5ab-482d-95f7-f6dcc59a9cd9 is in state STARTED 2025-04-10 00:47:37.350565 | orchestrator | 2025-04-10 00:47:37 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:47:40.394845 | orchestrator | 2025-04-10 00:47:37 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:47:40.395015 | orchestrator | 2025-04-10 00:47:40 | INFO  | Task d4f772ae-07cf-4ac1-8dfa-c71982deb9cc is in state STARTED 2025-04-10 00:47:40.395833 | orchestrator | 2025-04-10 00:47:40 | INFO  | Task 8d451614-13db-4639-a562-c1d4b40abb51 is in state STARTED 2025-04-10 00:47:40.396852 | orchestrator | 2025-04-10 00:47:40 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:47:40.398309 | orchestrator | 2025-04-10 00:47:40 | INFO  | Task 6c693d45-c5ab-482d-95f7-f6dcc59a9cd9 is in state STARTED 2025-04-10 00:47:40.399542 | orchestrator | 2025-04-10 00:47:40 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:47:40.399649 | orchestrator | 2025-04-10 00:47:40 | INFO  | Wait 1 
second(s) until the next check 2025-04-10 00:47:43.452662 | orchestrator | 2025-04-10 00:47:43.452778 | orchestrator | 2025-04-10 00:47:43.452798 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-04-10 00:47:43.452814 | orchestrator | 2025-04-10 00:47:43.452829 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-04-10 00:47:43.452844 | orchestrator | Thursday 10 April 2025 00:46:21 +0000 (0:00:00.495) 0:00:00.495 ******** 2025-04-10 00:47:43.452858 | orchestrator | ok: [testbed-manager] => { 2025-04-10 00:47:43.452874 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-04-10 00:47:43.452889 | orchestrator | } 2025-04-10 00:47:43.452903 | orchestrator | 2025-04-10 00:47:43.452918 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-04-10 00:47:43.452980 | orchestrator | Thursday 10 April 2025 00:46:21 +0000 (0:00:00.319) 0:00:00.815 ******** 2025-04-10 00:47:43.452996 | orchestrator | ok: [testbed-manager] 2025-04-10 00:47:43.453011 | orchestrator | 2025-04-10 00:47:43.453025 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-04-10 00:47:43.453039 | orchestrator | Thursday 10 April 2025 00:46:23 +0000 (0:00:01.899) 0:00:02.715 ******** 2025-04-10 00:47:43.453053 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-04-10 00:47:43.453067 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-04-10 00:47:43.453081 | orchestrator | 2025-04-10 00:47:43.453095 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-04-10 00:47:43.453109 | orchestrator | Thursday 10 April 2025 00:46:24 +0000 (0:00:01.235) 0:00:03.950 ******** 2025-04-10 00:47:43.453123 | orchestrator | changed: 
[testbed-manager] 2025-04-10 00:47:43.453138 | orchestrator | 2025-04-10 00:47:43.453152 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-04-10 00:47:43.453167 | orchestrator | Thursday 10 April 2025 00:46:26 +0000 (0:00:02.234) 0:00:06.185 ******** 2025-04-10 00:47:43.453183 | orchestrator | changed: [testbed-manager] 2025-04-10 00:47:43.453198 | orchestrator | 2025-04-10 00:47:43.453214 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-04-10 00:47:43.453229 | orchestrator | Thursday 10 April 2025 00:46:28 +0000 (0:00:01.531) 0:00:07.716 ******** 2025-04-10 00:47:43.453244 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-04-10 00:47:43.453260 | orchestrator | ok: [testbed-manager] 2025-04-10 00:47:43.453276 | orchestrator | 2025-04-10 00:47:43.453291 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-04-10 00:47:43.453306 | orchestrator | Thursday 10 April 2025 00:46:54 +0000 (0:00:26.146) 0:00:33.863 ******** 2025-04-10 00:47:43.453321 | orchestrator | changed: [testbed-manager] 2025-04-10 00:47:43.453337 | orchestrator | 2025-04-10 00:47:43.453353 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:47:43.453368 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:47:43.453385 | orchestrator | 2025-04-10 00:47:43.453401 | orchestrator | Thursday 10 April 2025 00:46:57 +0000 (0:00:03.202) 0:00:37.065 ******** 2025-04-10 00:47:43.453416 | orchestrator | =============================================================================== 2025-04-10 00:47:43.453432 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.15s 2025-04-10 00:47:43.453448 | orchestrator | osism.services.homer : Restart homer 
service ---------------------------- 3.20s 2025-04-10 00:47:43.453464 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.23s 2025-04-10 00:47:43.453486 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.90s 2025-04-10 00:47:43.453502 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.53s 2025-04-10 00:47:43.453518 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.24s 2025-04-10 00:47:43.453534 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.32s 2025-04-10 00:47:43.453549 | orchestrator | 2025-04-10 00:47:43.453563 | orchestrator | 2025-04-10 00:47:43.453577 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-04-10 00:47:43.453590 | orchestrator | 2025-04-10 00:47:43.453604 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-04-10 00:47:43.453618 | orchestrator | Thursday 10 April 2025 00:46:23 +0000 (0:00:00.736) 0:00:00.736 ******** 2025-04-10 00:47:43.453632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-04-10 00:47:43.453647 | orchestrator | 2025-04-10 00:47:43.453661 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-04-10 00:47:43.453681 | orchestrator | Thursday 10 April 2025 00:46:23 +0000 (0:00:00.471) 0:00:01.207 ******** 2025-04-10 00:47:43.453696 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-04-10 00:47:43.453710 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-04-10 00:47:43.453723 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-04-10 
00:47:43.453738 | orchestrator | 2025-04-10 00:47:43.453751 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-04-10 00:47:43.453765 | orchestrator | Thursday 10 April 2025 00:46:25 +0000 (0:00:01.790) 0:00:02.997 ******** 2025-04-10 00:47:43.453779 | orchestrator | changed: [testbed-manager] 2025-04-10 00:47:43.453793 | orchestrator | 2025-04-10 00:47:43.453807 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-04-10 00:47:43.453821 | orchestrator | Thursday 10 April 2025 00:46:27 +0000 (0:00:01.531) 0:00:04.529 ******** 2025-04-10 00:47:43.453835 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-04-10 00:47:43.453849 | orchestrator | ok: [testbed-manager] 2025-04-10 00:47:43.453863 | orchestrator | 2025-04-10 00:47:43.453890 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-04-10 00:47:43.453905 | orchestrator | Thursday 10 April 2025 00:47:17 +0000 (0:00:50.147) 0:00:54.680 ******** 2025-04-10 00:47:43.453919 | orchestrator | changed: [testbed-manager] 2025-04-10 00:47:43.453953 | orchestrator | 2025-04-10 00:47:43.453968 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-04-10 00:47:43.453982 | orchestrator | Thursday 10 April 2025 00:47:20 +0000 (0:00:02.878) 0:00:57.558 ******** 2025-04-10 00:47:43.453995 | orchestrator | ok: [testbed-manager] 2025-04-10 00:47:43.454009 | orchestrator | 2025-04-10 00:47:43.454123 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-04-10 00:47:43.454139 | orchestrator | Thursday 10 April 2025 00:47:21 +0000 (0:00:01.254) 0:00:58.813 ******** 2025-04-10 00:47:43.454153 | orchestrator | changed: [testbed-manager] 2025-04-10 00:47:43.454167 | orchestrator | 2025-04-10 00:47:43.454182 | orchestrator | RUNNING 
HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-04-10 00:47:43.454195 | orchestrator | Thursday 10 April 2025 00:47:24 +0000 (0:00:02.954) 0:01:01.768 ******** 2025-04-10 00:47:43.454210 | orchestrator | changed: [testbed-manager] 2025-04-10 00:47:43.454224 | orchestrator | 2025-04-10 00:47:43.454238 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-04-10 00:47:43.454252 | orchestrator | Thursday 10 April 2025 00:47:26 +0000 (0:00:02.105) 0:01:03.874 ******** 2025-04-10 00:47:43.454266 | orchestrator | changed: [testbed-manager] 2025-04-10 00:47:43.454280 | orchestrator | 2025-04-10 00:47:43.454294 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-04-10 00:47:43.454308 | orchestrator | Thursday 10 April 2025 00:47:27 +0000 (0:00:01.359) 0:01:05.234 ******** 2025-04-10 00:47:43.454321 | orchestrator | ok: [testbed-manager] 2025-04-10 00:47:43.454335 | orchestrator | 2025-04-10 00:47:43.454349 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:47:43.454363 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:47:43.454377 | orchestrator | 2025-04-10 00:47:43.454391 | orchestrator | Thursday 10 April 2025 00:47:28 +0000 (0:00:00.539) 0:01:05.773 ******** 2025-04-10 00:47:43.454405 | orchestrator | =============================================================================== 2025-04-10 00:47:43.454419 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 50.17s 2025-04-10 00:47:43.454433 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.96s 2025-04-10 00:47:43.454447 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.85s 2025-04-10 00:47:43.454467 | orchestrator | 
osism.services.openstackclient : Ensure that all containers are up ------ 2.10s 2025-04-10 00:47:43.454521 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.79s 2025-04-10 00:47:43.454535 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.54s 2025-04-10 00:47:43.454549 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.36s 2025-04-10 00:47:43.454563 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.25s 2025-04-10 00:47:43.454577 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.54s 2025-04-10 00:47:43.454591 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.47s 2025-04-10 00:47:43.454605 | orchestrator | 2025-04-10 00:47:43.454618 | orchestrator | 2025-04-10 00:47:43.454632 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 00:47:43.454646 | orchestrator | 2025-04-10 00:47:43.454660 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 00:47:43.454674 | orchestrator | Thursday 10 April 2025 00:46:21 +0000 (0:00:00.290) 0:00:00.290 ******** 2025-04-10 00:47:43.454688 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-04-10 00:47:43.454702 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-04-10 00:47:43.454715 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-04-10 00:47:43.454729 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-04-10 00:47:43.454743 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-04-10 00:47:43.454757 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-04-10 00:47:43.454771 | orchestrator | changed: [testbed-node-5] => 
(item=enable_netdata_True) 2025-04-10 00:47:43.454785 | orchestrator | 2025-04-10 00:47:43.454799 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-04-10 00:47:43.454813 | orchestrator | 2025-04-10 00:47:43.454827 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-04-10 00:47:43.454841 | orchestrator | Thursday 10 April 2025 00:46:24 +0000 (0:00:02.511) 0:00:02.801 ******** 2025-04-10 00:47:43.454869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 00:47:43.454885 | orchestrator | 2025-04-10 00:47:43.454899 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-04-10 00:47:43.454913 | orchestrator | Thursday 10 April 2025 00:46:26 +0000 (0:00:02.130) 0:00:04.931 ******** 2025-04-10 00:47:43.454963 | orchestrator | ok: [testbed-manager] 2025-04-10 00:47:43.454979 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:47:43.454993 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:47:43.455007 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:47:43.455020 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:47:43.455034 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:47:43.455048 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:47:43.455061 | orchestrator | 2025-04-10 00:47:43.455075 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-04-10 00:47:43.455098 | orchestrator | Thursday 10 April 2025 00:46:29 +0000 (0:00:03.016) 0:00:07.948 ******** 2025-04-10 00:47:43.455112 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:47:43.455126 | orchestrator | ok: [testbed-manager] 2025-04-10 00:47:43.455140 | orchestrator | ok: [testbed-node-1] 2025-04-10 
00:47:43.455153 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:47:43.455167 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:47:43.455180 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:47:43.455194 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:47:43.455208 | orchestrator | 2025-04-10 00:47:43.455222 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-04-10 00:47:43.455236 | orchestrator | Thursday 10 April 2025 00:46:34 +0000 (0:00:05.038) 0:00:12.986 ******** 2025-04-10 00:47:43.455249 | orchestrator | changed: [testbed-manager] 2025-04-10 00:47:43.455280 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:47:43.455294 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:47:43.455308 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:47:43.455321 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:47:43.455335 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:47:43.455349 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:47:43.455362 | orchestrator | 2025-04-10 00:47:43.455376 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-04-10 00:47:43.455390 | orchestrator | Thursday 10 April 2025 00:46:37 +0000 (0:00:03.079) 0:00:16.065 ******** 2025-04-10 00:47:43.455404 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:47:43.455418 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:47:43.455432 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:47:43.455445 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:47:43.455459 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:47:43.455472 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:47:43.455486 | orchestrator | changed: [testbed-manager] 2025-04-10 00:47:43.455500 | orchestrator | 2025-04-10 00:47:43.455514 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-04-10 00:47:43.455528 
| orchestrator | Thursday 10 April 2025 00:46:47 +0000 (0:00:09.463) 0:00:25.528 ******** 2025-04-10 00:47:43.455541 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:47:43.455555 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:47:43.455568 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:47:43.455582 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:47:43.455596 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:47:43.455609 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:47:43.455623 | orchestrator | changed: [testbed-manager] 2025-04-10 00:47:43.455637 | orchestrator | 2025-04-10 00:47:43.455651 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-04-10 00:47:43.455665 | orchestrator | Thursday 10 April 2025 00:47:08 +0000 (0:00:21.742) 0:00:47.271 ******** 2025-04-10 00:47:43.455680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 00:47:43.455698 | orchestrator | 2025-04-10 00:47:43.455712 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-04-10 00:47:43.455726 | orchestrator | Thursday 10 April 2025 00:47:11 +0000 (0:00:02.315) 0:00:49.587 ******** 2025-04-10 00:47:43.455740 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-04-10 00:47:43.455754 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-04-10 00:47:43.455767 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-04-10 00:47:43.455781 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-04-10 00:47:43.455795 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-04-10 00:47:43.455809 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-04-10 
00:47:43.455822 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-04-10 00:47:43.455836 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-04-10 00:47:43.455850 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-04-10 00:47:43.455863 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-04-10 00:47:43.455877 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-04-10 00:47:43.455946 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-04-10 00:47:43.455962 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-04-10 00:47:43.455975 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-04-10 00:47:43.455990 | orchestrator | 2025-04-10 00:47:43.456004 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-04-10 00:47:43.456018 | orchestrator | Thursday 10 April 2025 00:47:21 +0000 (0:00:09.961) 0:00:59.549 ******** 2025-04-10 00:47:43.456039 | orchestrator | ok: [testbed-manager] 2025-04-10 00:47:43.456054 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:47:43.456068 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:47:43.456082 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:47:43.456096 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:47:43.456110 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:47:43.456123 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:47:43.456137 | orchestrator | 2025-04-10 00:47:43.456151 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-04-10 00:47:43.456165 | orchestrator | Thursday 10 April 2025 00:47:23 +0000 (0:00:02.768) 0:01:02.318 ******** 2025-04-10 00:47:43.456179 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:47:43.456193 | orchestrator | changed: [testbed-manager] 2025-04-10 00:47:43.456207 | orchestrator | changed: [testbed-node-1] 2025-04-10 
00:47:43.456221 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:47:43.456234 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:47:43.456248 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:47:43.456262 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:47:43.456275 | orchestrator | 2025-04-10 00:47:43.456289 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-04-10 00:47:43.456308 | orchestrator | Thursday 10 April 2025 00:47:28 +0000 (0:00:04.278) 0:01:06.596 ******** 2025-04-10 00:47:43.456322 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:47:43.456337 | orchestrator | ok: [testbed-manager] 2025-04-10 00:47:43.456350 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:47:43.456364 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:47:43.456384 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:47:43.456399 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:47:43.456413 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:47:43.456427 | orchestrator | 2025-04-10 00:47:43.456441 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-04-10 00:47:43.456455 | orchestrator | Thursday 10 April 2025 00:47:30 +0000 (0:00:02.407) 0:01:09.004 ******** 2025-04-10 00:47:43.456469 | orchestrator | ok: [testbed-manager] 2025-04-10 00:47:43.456483 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:47:43.456497 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:47:43.456510 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:47:43.456524 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:47:43.456538 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:47:43.456551 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:47:43.456565 | orchestrator | 2025-04-10 00:47:43.456579 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-04-10 00:47:43.456593 | orchestrator | Thursday 10 April 2025 00:47:33 
+0000 (0:00:03.314) 0:01:12.319 ******** 2025-04-10 00:47:43.456607 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-04-10 00:47:43.456623 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 00:47:43.456637 | orchestrator | 2025-04-10 00:47:43.456651 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-04-10 00:47:43.456664 | orchestrator | Thursday 10 April 2025 00:47:36 +0000 (0:00:02.373) 0:01:14.692 ******** 2025-04-10 00:47:43.456678 | orchestrator | changed: [testbed-manager] 2025-04-10 00:47:43.456692 | orchestrator | 2025-04-10 00:47:43.456706 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-04-10 00:47:43.456720 | orchestrator | Thursday 10 April 2025 00:47:39 +0000 (0:00:02.887) 0:01:17.579 ******** 2025-04-10 00:47:43.456733 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:47:43.456747 | orchestrator | changed: [testbed-manager] 2025-04-10 00:47:43.456762 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:47:43.456784 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:47:43.456799 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:47:43.456814 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:47:43.456834 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:47:43.456848 | orchestrator | 2025-04-10 00:47:43.456862 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:47:43.456876 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:47:43.456891 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-04-10 00:47:43.456905 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:47:43.456941 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:47:43.456957 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:47:43.456971 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:47:43.456986 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:47:43.456999 | orchestrator | 2025-04-10 00:47:43.457013 | orchestrator | Thursday 10 April 2025 00:47:42 +0000 (0:00:03.118) 0:01:20.698 ******** 2025-04-10 00:47:43.457027 | orchestrator | =============================================================================== 2025-04-10 00:47:43.457042 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 21.74s 2025-04-10 00:47:43.457056 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 9.96s 2025-04-10 00:47:43.457070 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.46s 2025-04-10 00:47:43.457084 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 5.04s 2025-04-10 00:47:43.457097 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 4.28s 2025-04-10 00:47:43.457111 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.31s 2025-04-10 00:47:43.457125 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.12s 2025-04-10 00:47:43.457139 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.08s 2025-04-10 00:47:43.457153 | 
orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.01s 2025-04-10 00:47:43.457166 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.89s 2025-04-10 00:47:43.457180 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.77s 2025-04-10 00:47:43.457194 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.51s 2025-04-10 00:47:43.457208 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.41s 2025-04-10 00:47:43.457222 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.37s 2025-04-10 00:47:43.457242 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.32s 2025-04-10 00:47:43.457335 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.13s 2025-04-10 00:47:43.457354 | orchestrator | 2025-04-10 00:47:43 | INFO  | Task d4f772ae-07cf-4ac1-8dfa-c71982deb9cc is in state SUCCESS 2025-04-10 00:47:43.457372 | orchestrator | 2025-04-10 00:47:43 | INFO  | Task 8d451614-13db-4639-a562-c1d4b40abb51 is in state STARTED 2025-04-10 00:47:43.459602 | orchestrator | 2025-04-10 00:47:43 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:47:43.466125 | orchestrator | 2025-04-10 00:47:43 | INFO  | Task 6c693d45-c5ab-482d-95f7-f6dcc59a9cd9 is in state STARTED 2025-04-10 00:47:43.469996 | orchestrator | 2025-04-10 00:47:43 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:47:46.571798 | orchestrator | 2025-04-10 00:47:43 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:47:46.571971 | orchestrator | 2025-04-10 00:47:46 | INFO  | Task 8d451614-13db-4639-a562-c1d4b40abb51 is in state STARTED 2025-04-10 00:47:46.572327 | orchestrator | 2025-04-10 00:47:46 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 00:47:46.573463 | orchestrator | 2025-04-10 00:47:46 | INFO  | Task 6c693d45-c5ab-482d-95f7-f6dcc59a9cd9 is in state STARTED
2025-04-10 00:47:46.575511 | orchestrator | 2025-04-10 00:47:46 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:47:46.575595 | orchestrator | 2025-04-10 00:47:46 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:47:58.829784 | orchestrator | 2025-04-10 00:47:58 | INFO  | Task 6c693d45-c5ab-482d-95f7-f6dcc59a9cd9 is in state SUCCESS
2025-04-10 00:48:53.823310 | orchestrator | 2025-04-10 00:48:50 | INFO  | Wait 1 second(s) until the next
check
2025-04-10 00:48:53.823486 | orchestrator | 2025-04-10 00:48:53 | INFO  | Task 8d451614-13db-4639-a562-c1d4b40abb51 is in state STARTED
2025-04-10 00:48:53.824510 | orchestrator | 2025-04-10 00:48:53 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 00:48:53.826535 | orchestrator | 2025-04-10 00:48:53 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:48:56.878122 | orchestrator | 2025-04-10 00:48:53 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:48:56.878332 | orchestrator | 2025-04-10 00:48:56 | INFO  | Task 8d451614-13db-4639-a562-c1d4b40abb51 is in state STARTED
2025-04-10 00:48:56.879644 | orchestrator | 2025-04-10 00:48:56 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 00:48:56.879814 | orchestrator | 2025-04-10 00:48:56 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:48:59.943017 | orchestrator | 2025-04-10 00:48:56 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:48:59.943190 | orchestrator | 2025-04-10 00:48:59 | INFO  | Task cd779216-68fb-4512-af58-1eb0272f0daf is in state STARTED
2025-04-10 00:48:59.945369 | orchestrator | 2025-04-10 00:48:59 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED
2025-04-10 00:48:59.945441 | orchestrator | 2025-04-10 00:48:59 | INFO  | Task 8d451614-13db-4639-a562-c1d4b40abb51 is in state SUCCESS
2025-04-10 00:48:59.951604 | orchestrator |
2025-04-10 00:48:59.951735 | orchestrator |
2025-04-10 00:48:59.951754 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-04-10 00:48:59.951771 | orchestrator |
2025-04-10 00:48:59.951787 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-04-10 00:48:59.951802 | orchestrator | Thursday 10 April 2025 00:46:44 +0000 (0:00:00.395) 0:00:00.395 ********
2025-04-10 00:48:59.951818 | orchestrator | ok: [testbed-manager]
2025-04-10 00:48:59.951835 | orchestrator |
2025-04-10 00:48:59.951851 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-04-10 00:48:59.951866 | orchestrator | Thursday 10 April 2025 00:46:46 +0000 (0:00:01.378) 0:00:01.775 ********
2025-04-10 00:48:59.951882 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-04-10 00:48:59.951904 | orchestrator |
2025-04-10 00:48:59.951956 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-04-10 00:48:59.951972 | orchestrator | Thursday 10 April 2025 00:46:47 +0000 (0:00:00.940) 0:00:02.716 ********
2025-04-10 00:48:59.951986 | orchestrator | changed: [testbed-manager]
2025-04-10 00:48:59.952000 | orchestrator |
2025-04-10 00:48:59.952041 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-04-10 00:48:59.952056 | orchestrator | Thursday 10 April 2025 00:46:49 +0000 (0:00:02.701) 0:00:05.417 ********
2025-04-10 00:48:59.952070 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-04-10 00:48:59.952084 | orchestrator | ok: [testbed-manager]
2025-04-10 00:48:59.952098 | orchestrator |
2025-04-10 00:48:59.952112 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-04-10 00:48:59.952126 | orchestrator | Thursday 10 April 2025 00:47:52 +0000 (0:01:02.792) 0:01:08.209 ********
2025-04-10 00:48:59.952143 | orchestrator | changed: [testbed-manager]
2025-04-10 00:48:59.952159 | orchestrator |
2025-04-10 00:48:59.952175 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:48:59.952190 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:48:59.952208 | orchestrator |
2025-04-10 00:48:59.952223 | orchestrator | Thursday 10 April 2025 00:47:56 +0000 (0:00:03.945) 0:01:12.155 ********
2025-04-10 00:48:59.952240 | orchestrator | ===============================================================================
2025-04-10 00:48:59.952255 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 62.79s
2025-04-10 00:48:59.952271 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.95s
2025-04-10 00:48:59.952286 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.70s
2025-04-10 00:48:59.952302 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.38s
2025-04-10 00:48:59.952318 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.94s
2025-04-10 00:48:59.952333 | orchestrator |
2025-04-10 00:48:59.952349 | orchestrator |
2025-04-10 00:48:59.952365 | orchestrator | PLAY [Apply role common] *******************************************************
2025-04-10 00:48:59.952380 | orchestrator |
2025-04-10 00:48:59.952396 | orchestrator | TASK [common : include_tasks]
**************************************************
2025-04-10 00:48:59.952411 | orchestrator | Thursday 10 April 2025 00:46:16 +0000 (0:00:00.368) 0:00:00.368 ********
2025-04-10 00:48:59.952427 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 00:48:59.952444 | orchestrator |
2025-04-10 00:48:59.952460 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-04-10 00:48:59.952476 | orchestrator | Thursday 10 April 2025 00:46:18 +0000 (0:00:02.416) 0:00:02.784 ********
2025-04-10 00:48:59.952492 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-10 00:48:59.952507 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-10 00:48:59.952523 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-10 00:48:59.952537 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-10 00:48:59.952551 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-10 00:48:59.952565 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-10 00:48:59.952578 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-10 00:48:59.952592 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-10 00:48:59.952608 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-10 00:48:59.952622 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-10 00:48:59.952636 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-10 00:48:59.952649 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-10 00:48:59.952663 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-10 00:48:59.952689 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-10 00:48:59.952703 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-10 00:48:59.952717 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-10 00:48:59.952731 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-10 00:48:59.952757 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-10 00:48:59.952773 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-10 00:48:59.952787 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-10 00:48:59.952802 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-10 00:48:59.952816 | orchestrator |
2025-04-10 00:48:59.952829 | orchestrator | TASK [common : include_tasks] **************************************************
2025-04-10 00:48:59.952844 | orchestrator | Thursday 10 April 2025 00:46:23 +0000 (0:00:05.034) 0:00:07.820 ********
2025-04-10 00:48:59.952858 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 00:48:59.952880 | orchestrator |
2025-04-10 00:48:59.952894 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-04-10 00:48:59.952908 | orchestrator | Thursday 10 April 2025 00:46:25
+0000 (0:00:01.655) 0:00:09.475 ******** 2025-04-10 00:48:59.952946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.952964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.952980 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.952995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.953009 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.953030 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.953053 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.953069 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.953084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.953098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.953113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.953138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.953161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.953177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.953196 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.953212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.953226 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.953240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.953255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.953292 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.953307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
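The repeated `changed: [host] => (item={...})` results above come from the role looping over a service map (service name -> container settings) on each host. A minimal Python sketch of that kind of iteration, using a hypothetical two-entry subset of the data shown in this log (the real role is Ansible, not Python):

```python
# Hypothetical subset of the service map driving the per-item results above;
# only a few fields from the log are reproduced here for illustration.
services = {
    "fluentd": {
        "container_name": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/fluentd:5.0.5.20241206",
    },
    "cron": {
        "container_name": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/cron:3.0.20241206",
    },
}

def enabled_services(service_map):
    """Yield (name, settings) pairs for enabled services, the way a
    with_dict-style loop visits one service per item on each host."""
    for name, settings in service_map.items():
        if settings.get("enabled"):
            yield name, settings

for name, settings in enabled_services(services):
    print(f"item: key={name!r} image={settings['image']!r}")
```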
2025-04-10 00:48:59.953321 | orchestrator | 2025-04-10 00:48:59.953336 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-04-10 00:48:59.953350 | orchestrator | Thursday 10 April 2025 00:46:31 +0000 (0:00:05.812) 0:00:15.287 ******** 2025-04-10 00:48:59.953371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.953387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953402 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.953422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953437 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953459 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953473 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:48:59.953488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.953511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.953556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953593 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:48:59.953608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.953622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953651 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:48:59.953665 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:48:59.953679 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:48:59.953700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.953715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953745 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:48:59.953759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.953781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953809 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:48:59.953823 | orchestrator | 2025-04-10 00:48:59.953838 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-04-10 00:48:59.953852 | orchestrator | Thursday 10 April 2025 00:46:33 +0000 (0:00:02.659) 0:00:17.946 ******** 2025-04-10 00:48:59.953866 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.953888 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953903 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.953982 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:48:59.954000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.954125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.954148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.954162 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:48:59.954176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.954190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.954215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.954229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.954242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.954263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.954276 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:48:59.954288 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:48:59.954300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.954319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.954332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.954345 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:48:59.954358 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.954377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.954391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.954414 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:48:59.954427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-10 00:48:59.954440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.954453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.954466 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:48:59.954478 | orchestrator | 2025-04-10 00:48:59.954491 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-04-10 00:48:59.954503 | orchestrator | Thursday 10 April 2025 00:46:37 +0000 (0:00:03.123) 0:00:21.070 ******** 2025-04-10 00:48:59.954516 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:48:59.954528 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:48:59.954540 | orchestrator 
| skipping: [testbed-node-1] 2025-04-10 00:48:59.954552 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:48:59.954565 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:48:59.954577 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:48:59.954589 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:48:59.954601 | orchestrator | 2025-04-10 00:48:59.954614 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-04-10 00:48:59.954627 | orchestrator | Thursday 10 April 2025 00:46:37 +0000 (0:00:00.957) 0:00:22.027 ******** 2025-04-10 00:48:59.954639 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:48:59.954651 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:48:59.954663 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:48:59.954675 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:48:59.954688 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:48:59.954700 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:48:59.954712 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:48:59.954748 | orchestrator | 2025-04-10 00:48:59.954762 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-04-10 00:48:59.954774 | orchestrator | Thursday 10 April 2025 00:46:39 +0000 (0:00:01.099) 0:00:23.127 ******** 2025-04-10 00:48:59.954787 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:48:59.954799 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:48:59.954811 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:48:59.954823 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:48:59.954836 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:48:59.954848 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:48:59.954860 | orchestrator | changed: [testbed-manager] 2025-04-10 00:48:59.954873 | orchestrator | 2025-04-10 00:48:59.954885 | orchestrator | TASK [common : Fetch fluentd Docker image 
labels] ****************************** 2025-04-10 00:48:59.954904 | orchestrator | Thursday 10 April 2025 00:47:24 +0000 (0:00:45.465) 0:01:08.593 ******** 2025-04-10 00:48:59.954936 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:48:59.954955 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:48:59.954968 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:48:59.954981 | orchestrator | ok: [testbed-manager] 2025-04-10 00:48:59.954993 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:48:59.955005 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:48:59.955017 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:48:59.955035 | orchestrator | 2025-04-10 00:48:59.955048 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-04-10 00:48:59.955060 | orchestrator | Thursday 10 April 2025 00:47:29 +0000 (0:00:04.556) 0:01:13.150 ******** 2025-04-10 00:48:59.955072 | orchestrator | ok: [testbed-manager] 2025-04-10 00:48:59.955085 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:48:59.955097 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:48:59.955109 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:48:59.955122 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:48:59.955134 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:48:59.955146 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:48:59.955158 | orchestrator | 2025-04-10 00:48:59.955171 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-04-10 00:48:59.955183 | orchestrator | Thursday 10 April 2025 00:47:30 +0000 (0:00:01.287) 0:01:14.438 ******** 2025-04-10 00:48:59.955195 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:48:59.955208 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:48:59.955220 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:48:59.955232 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:48:59.955244 | orchestrator | skipping: [testbed-node-3] 
2025-04-10 00:48:59.955256 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:48:59.955268 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:48:59.955280 | orchestrator | 2025-04-10 00:48:59.955293 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-04-10 00:48:59.955305 | orchestrator | Thursday 10 April 2025 00:47:32 +0000 (0:00:01.632) 0:01:16.070 ******** 2025-04-10 00:48:59.955318 | orchestrator | skipping: [testbed-manager] 2025-04-10 00:48:59.955330 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:48:59.955342 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:48:59.955354 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:48:59.955366 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:48:59.955379 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:48:59.955391 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:48:59.955403 | orchestrator | 2025-04-10 00:48:59.955415 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-04-10 00:48:59.955427 | orchestrator | Thursday 10 April 2025 00:47:33 +0000 (0:00:01.349) 0:01:17.420 ******** 2025-04-10 00:48:59.955440 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.955453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.955470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.955490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.955531 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.955626 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.955640 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.955703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955729 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955761 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955788 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.955815 | orchestrator | 2025-04-10 00:48:59.955828 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-04-10 00:48:59.955840 | orchestrator | Thursday 10 April 2025 00:47:39 +0000 (0:00:06.528) 0:01:23.948 ******** 2025-04-10 00:48:59.955853 | orchestrator | [WARNING]: Skipped 2025-04-10 00:48:59.955866 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-04-10 00:48:59.955878 | orchestrator | to this access issue: 2025-04-10 00:48:59.955891 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-04-10 00:48:59.955903 | orchestrator | directory 2025-04-10 00:48:59.955966 | orchestrator | ok: 
[testbed-manager -> localhost] 2025-04-10 00:48:59.955982 | orchestrator | 2025-04-10 00:48:59.955995 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-04-10 00:48:59.956008 | orchestrator | Thursday 10 April 2025 00:47:41 +0000 (0:00:01.111) 0:01:25.059 ******** 2025-04-10 00:48:59.956020 | orchestrator | [WARNING]: Skipped 2025-04-10 00:48:59.956038 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-04-10 00:48:59.956051 | orchestrator | to this access issue: 2025-04-10 00:48:59.956063 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-04-10 00:48:59.956075 | orchestrator | directory 2025-04-10 00:48:59.956088 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-10 00:48:59.956100 | orchestrator | 2025-04-10 00:48:59.956112 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-04-10 00:48:59.956125 | orchestrator | Thursday 10 April 2025 00:47:41 +0000 (0:00:00.633) 0:01:25.693 ******** 2025-04-10 00:48:59.956144 | orchestrator | [WARNING]: Skipped 2025-04-10 00:48:59.956156 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-04-10 00:48:59.956168 | orchestrator | to this access issue: 2025-04-10 00:48:59.956181 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-04-10 00:48:59.956193 | orchestrator | directory 2025-04-10 00:48:59.956205 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-10 00:48:59.956216 | orchestrator | 2025-04-10 00:48:59.956226 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-04-10 00:48:59.956236 | orchestrator | Thursday 10 April 2025 00:47:42 +0000 (0:00:00.577) 0:01:26.270 ******** 2025-04-10 00:48:59.956246 | orchestrator | [WARNING]: Skipped 2025-04-10 00:48:59.956256 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-04-10 00:48:59.956266 | orchestrator | to this access issue: 2025-04-10 00:48:59.956276 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-04-10 00:48:59.956286 | orchestrator | directory 2025-04-10 00:48:59.956296 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-10 00:48:59.956306 | orchestrator | 2025-04-10 00:48:59.956316 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-04-10 00:48:59.956326 | orchestrator | Thursday 10 April 2025 00:47:43 +0000 (0:00:00.889) 0:01:27.160 ******** 2025-04-10 00:48:59.956336 | orchestrator | changed: [testbed-manager] 2025-04-10 00:48:59.956346 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:48:59.956356 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:48:59.956366 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:48:59.956376 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:48:59.956386 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:48:59.956396 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:48:59.956406 | orchestrator | 2025-04-10 00:48:59.956416 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-04-10 00:48:59.956426 | orchestrator | Thursday 10 April 2025 00:47:47 +0000 (0:00:04.873) 0:01:32.033 ******** 2025-04-10 00:48:59.956436 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-10 00:48:59.956447 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-10 00:48:59.956457 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-10 00:48:59.956467 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-10 00:48:59.956477 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-10 00:48:59.956487 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-10 00:48:59.956497 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-10 00:48:59.956507 | orchestrator | 2025-04-10 00:48:59.956517 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-04-10 00:48:59.956528 | orchestrator | Thursday 10 April 2025 00:47:51 +0000 (0:00:03.571) 0:01:35.604 ******** 2025-04-10 00:48:59.956538 | orchestrator | changed: [testbed-manager] 2025-04-10 00:48:59.956548 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:48:59.956558 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:48:59.956568 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:48:59.956578 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:48:59.956594 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:48:59.956604 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:48:59.956614 | orchestrator | 2025-04-10 00:48:59.956624 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-04-10 00:48:59.956635 | orchestrator | Thursday 10 April 2025 00:47:54 +0000 (0:00:03.342) 0:01:38.947 ******** 2025-04-10 00:48:59.956650 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.956665 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.956676 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.956687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-04-10 00:48:59.956698 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.956714 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.956725 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.956742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.956759 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.956769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.956783 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-04-10 00:48:59.956794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.956805 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.956816 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.956840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.956856 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.956867 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.956880 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.956890 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.956901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:48:59.956913 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.956937 | orchestrator | 2025-04-10 00:48:59.956948 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-04-10 00:48:59.956958 | orchestrator | Thursday 10 April 2025 00:47:58 +0000 (0:00:03.256) 0:01:42.203 ******** 2025-04-10 00:48:59.956969 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-10 00:48:59.956979 | orchestrator | changed: [testbed-node-0] 
=> (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-10 00:48:59.956994 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-10 00:48:59.957004 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-10 00:48:59.957014 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-10 00:48:59.957024 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-10 00:48:59.957034 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-10 00:48:59.957044 | orchestrator | 2025-04-10 00:48:59.957054 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-04-10 00:48:59.957074 | orchestrator | Thursday 10 April 2025 00:48:00 +0000 (0:00:02.735) 0:01:44.939 ******** 2025-04-10 00:48:59.957085 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-10 00:48:59.957095 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-10 00:48:59.957105 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-10 00:48:59.957115 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-10 00:48:59.957125 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-10 00:48:59.957175 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-10 00:48:59.957186 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-10 00:48:59.957196 | orchestrator | 2025-04-10 00:48:59.957206 | orchestrator | TASK [common : 
Check common containers] **************************************** 2025-04-10 00:48:59.957216 | orchestrator | Thursday 10 April 2025 00:48:03 +0000 (0:00:02.696) 0:01:47.636 ******** 2025-04-10 00:48:59.957227 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.957238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.957248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.957259 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.957293 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.957305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957337 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.957364 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-10 00:48:59.957401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:48:59.957480 | orchestrator | 2025-04-10 00:48:59.957490 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-04-10 00:48:59.957501 | orchestrator | Thursday 10 April 2025 00:48:07 +0000 (0:00:03.987) 0:01:51.623 ******** 2025-04-10 00:48:59.957511 | orchestrator | changed: [testbed-manager] 2025-04-10 00:48:59.957526 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:48:59.957536 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:48:59.957546 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:48:59.957556 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:48:59.957566 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:48:59.957576 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:48:59.957586 | orchestrator | 2025-04-10 00:48:59.957597 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-04-10 00:48:59.957607 | orchestrator | Thursday 10 April 2025 00:48:09 +0000 (0:00:01.860) 0:01:53.483 ******** 2025-04-10 00:48:59.957617 | orchestrator | changed: [testbed-manager] 2025-04-10 00:48:59.957632 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:48:59.957642 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:48:59.957652 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:48:59.957662 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:48:59.957672 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:48:59.957683 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:48:59.957693 | orchestrator | 2025-04-10 00:48:59.957703 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-10 00:48:59.957713 | orchestrator | Thursday 10 April 2025 00:48:11 +0000 (0:00:01.552) 0:01:55.036 ******** 2025-04-10 00:48:59.957723 | 
orchestrator | 2025-04-10 00:48:59.957733 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-10 00:48:59.957743 | orchestrator | Thursday 10 April 2025 00:48:11 +0000 (0:00:00.075) 0:01:55.112 ******** 2025-04-10 00:48:59.957753 | orchestrator | 2025-04-10 00:48:59.957763 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-10 00:48:59.957773 | orchestrator | Thursday 10 April 2025 00:48:11 +0000 (0:00:00.063) 0:01:55.176 ******** 2025-04-10 00:48:59.957784 | orchestrator | 2025-04-10 00:48:59.957794 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-10 00:48:59.957804 | orchestrator | Thursday 10 April 2025 00:48:11 +0000 (0:00:00.057) 0:01:55.234 ******** 2025-04-10 00:48:59.957814 | orchestrator | 2025-04-10 00:48:59.957824 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-10 00:48:59.957839 | orchestrator | Thursday 10 April 2025 00:48:11 +0000 (0:00:00.259) 0:01:55.493 ******** 2025-04-10 00:48:59.957849 | orchestrator | 2025-04-10 00:48:59.957859 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-10 00:48:59.957869 | orchestrator | Thursday 10 April 2025 00:48:11 +0000 (0:00:00.069) 0:01:55.562 ******** 2025-04-10 00:48:59.957879 | orchestrator | 2025-04-10 00:48:59.957889 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-10 00:48:59.957899 | orchestrator | Thursday 10 April 2025 00:48:11 +0000 (0:00:00.061) 0:01:55.624 ******** 2025-04-10 00:48:59.957909 | orchestrator | 2025-04-10 00:48:59.957934 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-04-10 00:48:59.957945 | orchestrator | Thursday 10 April 2025 00:48:11 +0000 (0:00:00.077) 0:01:55.702 ******** 2025-04-10 00:48:59.957955 
| orchestrator | changed: [testbed-node-1] 2025-04-10 00:48:59.957965 | orchestrator | changed: [testbed-manager] 2025-04-10 00:48:59.957975 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:48:59.957985 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:48:59.957995 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:48:59.958005 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:48:59.958039 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:48:59.958052 | orchestrator | 2025-04-10 00:48:59.958062 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-04-10 00:48:59.958072 | orchestrator | Thursday 10 April 2025 00:48:20 +0000 (0:00:08.538) 0:02:04.240 ******** 2025-04-10 00:48:59.958082 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:48:59.958092 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:48:59.958102 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:48:59.958112 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:48:59.958122 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:48:59.958132 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:48:59.958142 | orchestrator | changed: [testbed-manager] 2025-04-10 00:48:59.958152 | orchestrator | 2025-04-10 00:48:59.958162 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-04-10 00:48:59.958172 | orchestrator | Thursday 10 April 2025 00:48:43 +0000 (0:00:23.473) 0:02:27.713 ******** 2025-04-10 00:48:59.958182 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:48:59.958192 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:48:59.958202 | orchestrator | ok: [testbed-manager] 2025-04-10 00:48:59.958212 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:48:59.958222 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:48:59.958232 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:48:59.958242 | orchestrator | ok: [testbed-node-5] 2025-04-10 
00:48:59.958252 | orchestrator | 2025-04-10 00:48:59.958262 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-04-10 00:48:59.958272 | orchestrator | Thursday 10 April 2025 00:48:46 +0000 (0:00:02.990) 0:02:30.704 ******** 2025-04-10 00:48:59.958282 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:48:59.958292 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:48:59.958302 | orchestrator | changed: [testbed-manager] 2025-04-10 00:48:59.958312 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:48:59.958322 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:48:59.958332 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:48:59.958342 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:48:59.958352 | orchestrator | 2025-04-10 00:48:59.958362 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:48:59.958373 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-10 00:48:59.958385 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-10 00:48:59.958396 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-10 00:48:59.958417 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-10 00:48:59.958532 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-10 00:48:59.958546 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-10 00:48:59.958557 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-10 00:48:59.958567 | orchestrator | 2025-04-10 00:48:59.958577 | orchestrator | 2025-04-10 00:48:59.958587 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 00:48:59.958597 | orchestrator | Thursday 10 April 2025 00:48:56 +0000 (0:00:09.997) 0:02:40.701 ******** 2025-04-10 00:48:59.958608 | orchestrator | =============================================================================== 2025-04-10 00:48:59.958618 | orchestrator | common : Ensure fluentd image is present for label check --------------- 45.47s 2025-04-10 00:48:59.958628 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 23.47s 2025-04-10 00:48:59.958643 | orchestrator | common : Restart cron container ---------------------------------------- 10.00s 2025-04-10 00:48:59.958654 | orchestrator | common : Restart fluentd container -------------------------------------- 8.54s 2025-04-10 00:48:59.958664 | orchestrator | common : Copying over config.json files for services -------------------- 6.53s 2025-04-10 00:48:59.958674 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.81s 2025-04-10 00:48:59.958684 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.03s 2025-04-10 00:48:59.958694 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 4.87s 2025-04-10 00:48:59.958704 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 4.56s 2025-04-10 00:48:59.958714 | orchestrator | common : Check common containers ---------------------------------------- 3.99s 2025-04-10 00:48:59.958724 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.57s 2025-04-10 00:48:59.958734 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.34s 2025-04-10 00:48:59.958744 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.26s 2025-04-10 00:48:59.958754 | orchestrator | 
service-cert-copy : common | Copying over backend internal TLS key ------ 3.12s 2025-04-10 00:48:59.958764 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.99s 2025-04-10 00:48:59.958774 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.74s 2025-04-10 00:48:59.958784 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.70s 2025-04-10 00:48:59.958794 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.66s 2025-04-10 00:48:59.958804 | orchestrator | common : include_tasks -------------------------------------------------- 2.42s 2025-04-10 00:48:59.958814 | orchestrator | common : Creating log volume -------------------------------------------- 1.86s 2025-04-10 00:48:59.958824 | orchestrator | 2025-04-10 00:48:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:48:59.958835 | orchestrator | 2025-04-10 00:48:59 | INFO  | Task 4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state STARTED 2025-04-10 00:48:59.958845 | orchestrator | 2025-04-10 00:48:59 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:48:59.958858 | orchestrator | 2025-04-10 00:48:59 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED 2025-04-10 00:49:02.996410 | orchestrator | 2025-04-10 00:48:59 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:49:02.996649 | orchestrator | 2025-04-10 00:49:02 | INFO  | Task cd779216-68fb-4512-af58-1eb0272f0daf is in state STARTED 2025-04-10 00:49:02.996795 | orchestrator | 2025-04-10 00:49:02 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:49:02.998068 | orchestrator | 2025-04-10 00:49:02 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:49:02.998718 | orchestrator | 2025-04-10 00:49:02 | INFO  | Task 
4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state STARTED 2025-04-10 00:49:02.999730 | orchestrator | 2025-04-10 00:49:02 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:49:03.003974 | orchestrator | 2025-04-10 00:49:03 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED 2025-04-10 00:49:03.004186 | orchestrator | 2025-04-10 00:49:03 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:49:06.083213 | orchestrator | 2025-04-10 00:49:06 | INFO  | Task cd779216-68fb-4512-af58-1eb0272f0daf is in state STARTED 2025-04-10 00:49:06.092510 | orchestrator | 2025-04-10 00:49:06 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:49:06.097236 | orchestrator | 2025-04-10 00:49:06 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:49:06.099546 | orchestrator | 2025-04-10 00:49:06 | INFO  | Task 4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state STARTED 2025-04-10 00:49:06.100999 | orchestrator | 2025-04-10 00:49:06 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:49:06.101035 | orchestrator | 2025-04-10 00:49:06 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED 2025-04-10 00:49:09.141898 | orchestrator | 2025-04-10 00:49:06 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:49:09.142274 | orchestrator | 2025-04-10 00:49:09 | INFO  | Task cd779216-68fb-4512-af58-1eb0272f0daf is in state STARTED 2025-04-10 00:49:09.142368 | orchestrator | 2025-04-10 00:49:09 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:49:09.142747 | orchestrator | 2025-04-10 00:49:09 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:49:09.143602 | orchestrator | 2025-04-10 00:49:09 | INFO  | Task 4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state STARTED 2025-04-10 00:49:09.144301 | orchestrator | 2025-04-10 00:49:09 | INFO  | Task 
1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:49:09.146180 | orchestrator | 2025-04-10 00:49:09 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED 2025-04-10 00:49:09.146472 | orchestrator | 2025-04-10 00:49:09 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:49:12.195599 | orchestrator | 2025-04-10 00:49:12 | INFO  | Task cd779216-68fb-4512-af58-1eb0272f0daf is in state STARTED 2025-04-10 00:49:12.197798 | orchestrator | 2025-04-10 00:49:12 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:49:12.198683 | orchestrator | 2025-04-10 00:49:12 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:49:12.200058 | orchestrator | 2025-04-10 00:49:12 | INFO  | Task 4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state STARTED 2025-04-10 00:49:12.200112 | orchestrator | 2025-04-10 00:49:12 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:49:12.200905 | orchestrator | 2025-04-10 00:49:12 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED 2025-04-10 00:49:15.236768 | orchestrator | 2025-04-10 00:49:12 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:49:15.237127 | orchestrator | 2025-04-10 00:49:15 | INFO  | Task cd779216-68fb-4512-af58-1eb0272f0daf is in state STARTED 2025-04-10 00:49:15.237232 | orchestrator | 2025-04-10 00:49:15 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:49:15.238949 | orchestrator | 2025-04-10 00:49:15 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:49:15.239697 | orchestrator | 2025-04-10 00:49:15 | INFO  | Task 4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state STARTED 2025-04-10 00:49:15.240670 | orchestrator | 2025-04-10 00:49:15 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:49:15.242432 | orchestrator | 2025-04-10 00:49:15 | INFO  | Task 
0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED 2025-04-10 00:49:18.279587 | orchestrator | 2025-04-10 00:49:15 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:49:18.279740 | orchestrator | 2025-04-10 00:49:18 | INFO  | Task cd779216-68fb-4512-af58-1eb0272f0daf is in state STARTED 2025-04-10 00:49:18.281383 | orchestrator | 2025-04-10 00:49:18 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:49:18.282481 | orchestrator | 2025-04-10 00:49:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:49:18.283169 | orchestrator | 2025-04-10 00:49:18 | INFO  | Task 4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state STARTED 2025-04-10 00:49:18.284095 | orchestrator | 2025-04-10 00:49:18 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:49:18.285035 | orchestrator | 2025-04-10 00:49:18 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED 2025-04-10 00:49:21.343448 | orchestrator | 2025-04-10 00:49:18 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:49:21.343586 | orchestrator | 2025-04-10 00:49:21 | INFO  | Task cd779216-68fb-4512-af58-1eb0272f0daf is in state SUCCESS 2025-04-10 00:49:21.345441 | orchestrator | 2025-04-10 00:49:21 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:49:21.346412 | orchestrator | 2025-04-10 00:49:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:49:21.347229 | orchestrator | 2025-04-10 00:49:21 | INFO  | Task 4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state STARTED 2025-04-10 00:49:21.348595 | orchestrator | 2025-04-10 00:49:21 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:49:21.352223 | orchestrator | 2025-04-10 00:49:21 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED 2025-04-10 00:49:24.396972 | orchestrator | 2025-04-10 00:49:21 | INFO  | Wait 1 
second(s) until the next check 2025-04-10 00:49:24.397143 | orchestrator | 2025-04-10 00:49:24 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:49:24.398467 | orchestrator | 2025-04-10 00:49:24 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:49:24.398507 | orchestrator | 2025-04-10 00:49:24 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:49:24.400379 | orchestrator | 2025-04-10 00:49:24 | INFO  | Task 4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state STARTED 2025-04-10 00:49:24.401055 | orchestrator | 2025-04-10 00:49:24 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:49:24.401822 | orchestrator | 2025-04-10 00:49:24 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED 2025-04-10 00:49:24.403172 | orchestrator | 2025-04-10 00:49:24 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:49:27.441811 | orchestrator | 2025-04-10 00:49:27 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:49:27.442895 | orchestrator | 2025-04-10 00:49:27 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:49:27.444439 | orchestrator | 2025-04-10 00:49:27 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:49:27.446438 | orchestrator | 2025-04-10 00:49:27 | INFO  | Task 4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state STARTED 2025-04-10 00:49:27.449202 | orchestrator | 2025-04-10 00:49:27 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:49:27.449981 | orchestrator | 2025-04-10 00:49:27 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED 2025-04-10 00:49:27.450252 | orchestrator | 2025-04-10 00:49:27 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:49:30.506317 | orchestrator | 2025-04-10 00:49:30 | INFO  | Task 
c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:49:30.506523 | orchestrator | 2025-04-10 00:49:30 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:49:30.507401 | orchestrator | 2025-04-10 00:49:30 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:49:30.508052 | orchestrator | 2025-04-10 00:49:30 | INFO  | Task 4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state STARTED 2025-04-10 00:49:30.508825 | orchestrator | 2025-04-10 00:49:30 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:49:30.509541 | orchestrator | 2025-04-10 00:49:30 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED 2025-04-10 00:49:30.509689 | orchestrator | 2025-04-10 00:49:30 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:49:33.543180 | orchestrator | 2025-04-10 00:49:33 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:49:33.544190 | orchestrator | 2025-04-10 00:49:33 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:49:33.546204 | orchestrator | 2025-04-10 00:49:33 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:49:33.547475 | orchestrator | 2025-04-10 00:49:33 | INFO  | Task 4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state STARTED 2025-04-10 00:49:33.548520 | orchestrator | 2025-04-10 00:49:33 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:49:33.548579 | orchestrator | 2025-04-10 00:49:33 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED 2025-04-10 00:49:33.548635 | orchestrator | 2025-04-10 00:49:33 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:49:36.584758 | orchestrator | 2025-04-10 00:49:36 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:49:36.585362 | orchestrator | 2025-04-10 00:49:36 | INFO  | Task 
c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:49:36.587670 | orchestrator | 2025-04-10 00:49:36 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:49:36.588730 | orchestrator | 2025-04-10 00:49:36 | INFO  | Task 4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state STARTED 2025-04-10 00:49:36.589273 | orchestrator | 2025-04-10 00:49:36 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:49:36.594764 | orchestrator | 2025-04-10 00:49:36 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED 2025-04-10 00:49:39.644203 | orchestrator | 2025-04-10 00:49:36 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:49:39.644345 | orchestrator | 2025-04-10 00:49:39 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:49:39.645987 | orchestrator | 2025-04-10 00:49:39 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:49:39.647530 | orchestrator | 2025-04-10 00:49:39 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:49:39.649647 | orchestrator | 2025-04-10 00:49:39 | INFO  | Task 4f632890-bf87-4a8a-9c7e-b709c3e31552 is in state SUCCESS 2025-04-10 00:49:39.651361 | orchestrator | 2025-04-10 00:49:39.651402 | orchestrator | 2025-04-10 00:49:39.651418 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 00:49:39.651434 | orchestrator | 2025-04-10 00:49:39.651450 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 00:49:39.651465 | orchestrator | Thursday 10 April 2025 00:49:01 +0000 (0:00:00.349) 0:00:00.349 ******** 2025-04-10 00:49:39.651480 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:49:39.651497 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:49:39.651512 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:49:39.651527 | orchestrator | 
2025-04-10 00:49:39.651542 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-10 00:49:39.651557 | orchestrator | Thursday 10 April 2025 00:49:02 +0000 (0:00:00.694) 0:00:01.043 ********
2025-04-10 00:49:39.651572 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-04-10 00:49:39.651587 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-04-10 00:49:39.651602 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-04-10 00:49:39.651617 | orchestrator |
2025-04-10 00:49:39.651632 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-04-10 00:49:39.651646 | orchestrator |
2025-04-10 00:49:39.651661 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-04-10 00:49:39.651676 | orchestrator | Thursday 10 April 2025 00:49:02 +0000 (0:00:00.727) 0:00:01.771 ********
2025-04-10 00:49:39.651691 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:49:39.651706 | orchestrator |
2025-04-10 00:49:39.651721 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-04-10 00:49:39.651736 | orchestrator | Thursday 10 April 2025 00:49:03 +0000 (0:00:01.002) 0:00:02.773 ********
2025-04-10 00:49:39.651750 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-04-10 00:49:39.651766 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-04-10 00:49:39.651780 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-04-10 00:49:39.651795 | orchestrator |
2025-04-10 00:49:39.651810 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-04-10 00:49:39.651825 | orchestrator | Thursday 10 April 2025 00:49:04 +0000 (0:00:01.165) 0:00:03.939 ********
2025-04-10 00:49:39.651840 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-04-10 00:49:39.651855 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-04-10 00:49:39.651870 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-04-10 00:49:39.651884 | orchestrator |
2025-04-10 00:49:39.651899 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-04-10 00:49:39.651937 | orchestrator | Thursday 10 April 2025 00:49:07 +0000 (0:00:02.552) 0:00:06.491 ********
2025-04-10 00:49:39.651952 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:49:39.651983 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:49:39.652001 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:49:39.652017 | orchestrator |
2025-04-10 00:49:39.652037 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-04-10 00:49:39.652053 | orchestrator | Thursday 10 April 2025 00:49:10 +0000 (0:00:03.265) 0:00:09.757 ********
2025-04-10 00:49:39.652069 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:49:39.652085 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:49:39.652100 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:49:39.652135 | orchestrator |
2025-04-10 00:49:39.652151 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:49:39.652168 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:49:39.652185 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:49:39.652202 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:49:39.652218 | orchestrator |
2025-04-10 00:49:39.652233 | orchestrator |
2025-04-10 00:49:39.652249 | orchestrator | TASKS RECAP ********************************************************************
2025-04-10 00:49:39.652265 | orchestrator | Thursday 10 April 2025 00:49:20 +0000 (0:00:09.290) 0:00:19.047 ********
2025-04-10 00:49:39.652281 | orchestrator | ===============================================================================
2025-04-10 00:49:39.652296 | orchestrator | memcached : Restart memcached container --------------------------------- 9.29s
2025-04-10 00:49:39.652312 | orchestrator | memcached : Check memcached container ----------------------------------- 3.27s
2025-04-10 00:49:39.652328 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.55s
2025-04-10 00:49:39.652343 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.17s
2025-04-10 00:49:39.652357 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.00s
2025-04-10 00:49:39.652371 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s
2025-04-10 00:49:39.652385 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s
2025-04-10 00:49:39.652399 | orchestrator |
2025-04-10 00:49:39.652413 | orchestrator |
2025-04-10 00:49:39.652427 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-10 00:49:39.652441 | orchestrator |
2025-04-10 00:49:39.652455 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-10 00:49:39.652469 | orchestrator | Thursday 10 April 2025 00:49:02 +0000 (0:00:00.511) 0:00:00.511 ********
2025-04-10 00:49:39.652483 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:49:39.652497 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:49:39.652511 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:49:39.652525 | orchestrator |
2025-04-10 00:49:39.652539 | orchestrator | TASK [Group hosts based on enabled
services] *********************************** 2025-04-10 00:49:39.652565 | orchestrator | Thursday 10 April 2025 00:49:03 +0000 (0:00:00.533) 0:00:01.044 ******** 2025-04-10 00:49:39.652580 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-04-10 00:49:39.652594 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-04-10 00:49:39.652609 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-04-10 00:49:39.652623 | orchestrator | 2025-04-10 00:49:39.652637 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-04-10 00:49:39.652650 | orchestrator | 2025-04-10 00:49:39.652665 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-04-10 00:49:39.652679 | orchestrator | Thursday 10 April 2025 00:49:03 +0000 (0:00:00.382) 0:00:01.428 ******** 2025-04-10 00:49:39.652693 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:49:39.652707 | orchestrator | 2025-04-10 00:49:39.652721 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-04-10 00:49:39.652735 | orchestrator | Thursday 10 April 2025 00:49:04 +0000 (0:00:01.003) 0:00:02.431 ******** 2025-04-10 00:49:39.652752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.652779 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.652794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.652810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.652825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.652855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.652871 | orchestrator | 2025-04-10 00:49:39.652885 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-04-10 00:49:39.652900 | orchestrator | Thursday 10 April 2025 00:49:07 +0000 (0:00:02.397) 0:00:04.829 ******** 2025-04-10 00:49:39.652951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.652975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.652990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653074 | orchestrator | 2025-04-10 00:49:39.653089 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-04-10 00:49:39.653103 | orchestrator | Thursday 10 April 2025 00:49:11 +0000 (0:00:03.832) 0:00:08.661 ******** 2025-04-10 
00:49:39.653117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653220 | orchestrator | 2025-04-10 00:49:39.653235 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-04-10 00:49:39.653256 | orchestrator | Thursday 10 April 2025 00:49:16 +0000 (0:00:05.032) 0:00:13.694 ******** 2025-04-10 00:49:39.653270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-10 00:49:39.653351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 
'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-04-10 00:49:39.654111 | orchestrator |
2025-04-10 00:49:39.654142 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-04-10 00:49:39.654157 | orchestrator | Thursday 10 April 2025 00:49:18 +0000 (0:00:02.293) 0:00:15.987 ********
2025-04-10 00:49:39.654172 | orchestrator |
2025-04-10 00:49:39.654187 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-04-10 00:49:39.654203 | orchestrator | Thursday 10 April 2025 00:49:18 +0000 (0:00:00.123) 0:00:16.110 ********
2025-04-10 00:49:39.654218 | orchestrator |
2025-04-10 00:49:39.654232 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-04-10 00:49:39.654247 | orchestrator | Thursday 10 April 2025 00:49:18 +0000 (0:00:00.250) 0:00:16.219 ********
2025-04-10 00:49:39.654262 | orchestrator |
2025-04-10 00:49:39.654276 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-04-10 00:49:39.654291 | orchestrator | Thursday 10 April 2025 00:49:18 +0000 (0:00:00.250) 0:00:16.469 ********
2025-04-10 00:49:39.654306 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:49:39.654321 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:49:39.654336 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:49:39.654351 | orchestrator |
2025-04-10 00:49:39.654365 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-04-10 00:49:39.654380 | orchestrator | Thursday 10 April 2025 00:49:27 +0000 (0:00:08.613) 0:00:25.083 ********
2025-04-10 00:49:39.654395 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:49:39.654410 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:49:39.654433 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:49:39.654448 | orchestrator |
2025-04-10 00:49:39.654463 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:49:39.654477 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:49:39.654493 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:49:39.654508 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 00:49:39.654523 | orchestrator |
2025-04-10 00:49:39.654538 | orchestrator |
2025-04-10 00:49:39.654553 | orchestrator | TASKS RECAP ********************************************************************
2025-04-10 00:49:39.654568 | orchestrator | Thursday 10 April 2025 00:49:37 +0000 (0:00:10.440) 0:00:35.523 ********
2025-04-10 00:49:39.654583 | orchestrator | ===============================================================================
2025-04-10 00:49:39.654598 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.44s
2025-04-10 00:49:39.654612 | orchestrator | redis : Restart redis container ----------------------------------------- 8.61s
2025-04-10 00:49:39.654627 | orchestrator | redis : Copying over redis config files --------------------------------- 5.03s
2025-04-10 00:49:39.654642 | orchestrator | redis : Copying over default config.json files -------------------------- 3.83s
2025-04-10 00:49:39.654656 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.40s
2025-04-10 00:49:39.654671 | orchestrator | redis : Check redis containers ------------------------------------------ 2.29s
2025-04-10 00:49:39.654685 |
orchestrator | redis : include_tasks --------------------------------------------------- 1.00s
2025-04-10 00:49:39.654700 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s
2025-04-10 00:49:39.654715 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.48s
2025-04-10 00:49:39.654729 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s
2025-04-10 00:49:39.654745 | orchestrator | 2025-04-10 00:49:39 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:49:39.654769 | orchestrator | 2025-04-10 00:49:39 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED
2025-04-10 00:49:42.690296 | orchestrator | 2025-04-10 00:49:39 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:49:42.690438 | orchestrator | 2025-04-10 00:49:42 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED
2025-04-10 00:49:42.691198 | orchestrator | 2025-04-10 00:49:42 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED
2025-04-10 00:49:42.692544 | orchestrator | 2025-04-10 00:49:42 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 00:49:42.693565 | orchestrator | 2025-04-10 00:49:42 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:49:42.697994 | orchestrator | 2025-04-10 00:49:42 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state STARTED
[... the same five tasks are re-checked every ~3 seconds and all remain in state STARTED through 2025-04-10 00:50:22 ...]
2025-04-10 00:50:25.452337 | orchestrator | 2025-04-10 00:50:22 | INFO  | Wait 1
second(s) until the next check 2025-04-10 00:50:25.452506 | orchestrator | 2025-04-10 00:50:25 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:50:25.459185 | orchestrator | 2025-04-10 00:50:25 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:50:25.460301 | orchestrator | 2025-04-10 00:50:25 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:50:25.464404 | orchestrator | 2025-04-10 00:50:25 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:50:25.468168 | orchestrator | 2025-04-10 00:50:25 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:50:25.469372 | orchestrator | 2025-04-10 00:50:25 | INFO  | Task 0474de3d-1c79-4d59-b7a0-9f02c4a61bdb is in state SUCCESS 2025-04-10 00:50:25.472111 | orchestrator | 2025-04-10 00:50:25.472198 | orchestrator | 2025-04-10 00:50:25.472229 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 00:50:25.472255 | orchestrator | 2025-04-10 00:50:25.472279 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 00:50:25.472302 | orchestrator | Thursday 10 April 2025 00:49:03 +0000 (0:00:00.613) 0:00:00.613 ******** 2025-04-10 00:50:25.472327 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:50:25.472354 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:50:25.472379 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:50:25.472405 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:50:25.472430 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:50:25.472453 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:50:25.472476 | orchestrator | 2025-04-10 00:50:25.472502 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 00:50:25.472526 | orchestrator | Thursday 10 April 2025 00:49:04 +0000 
(0:00:01.022) 0:00:01.636 ******** 2025-04-10 00:50:25.472549 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-10 00:50:25.472572 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-10 00:50:25.472595 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-10 00:50:25.472620 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-10 00:50:25.472646 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-10 00:50:25.472687 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-10 00:50:25.472719 | orchestrator | 2025-04-10 00:50:25.472744 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-04-10 00:50:25.472769 | orchestrator | 2025-04-10 00:50:25.472795 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-04-10 00:50:25.472820 | orchestrator | Thursday 10 April 2025 00:49:05 +0000 (0:00:01.422) 0:00:03.058 ******** 2025-04-10 00:50:25.472845 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 00:50:25.472874 | orchestrator | 2025-04-10 00:50:25.472902 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-10 00:50:25.472991 | orchestrator | Thursday 10 April 2025 00:49:08 +0000 (0:00:03.315) 0:00:06.374 ******** 2025-04-10 00:50:25.473017 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-04-10 00:50:25.473042 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-04-10 00:50:25.473067 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-04-10 
00:50:25.473091 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-04-10 00:50:25.473116 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-04-10 00:50:25.473140 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-04-10 00:50:25.473165 | orchestrator | 
2025-04-10 00:50:25.473215 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-04-10 00:50:25.473241 | orchestrator | Thursday 10 April 2025 00:49:10 +0000 (0:00:01.570) 0:00:07.946 ********
2025-04-10 00:50:25.473268 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-04-10 00:50:25.473307 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-04-10 00:50:25.473334 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-04-10 00:50:25.473361 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-04-10 00:50:25.473388 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-04-10 00:50:25.473415 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-04-10 00:50:25.473442 | orchestrator | 
2025-04-10 00:50:25.473467 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-04-10 00:50:25.473493 | orchestrator | Thursday 10 April 2025 00:49:14 +0000 (0:00:03.987) 0:00:11.933 ********
2025-04-10 00:50:25.473521 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch) 
2025-04-10 00:50:25.473548 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:50:25.473578 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch) 
2025-04-10 00:50:25.473606 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch) 
2025-04-10 00:50:25.473634 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:50:25.473660 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch) 
2025-04-10 00:50:25.473686 | orchestrator | skipping: [testbed-node-2]
2025-04-10 
00:50:25.473712 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-04-10 00:50:25.473739 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:50:25.473765 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:50:25.473793 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-04-10 00:50:25.473820 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:50:25.473848 | orchestrator | 2025-04-10 00:50:25.473874 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-04-10 00:50:25.473902 | orchestrator | Thursday 10 April 2025 00:49:16 +0000 (0:00:01.749) 0:00:13.683 ******** 2025-04-10 00:50:25.474003 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:50:25.474167 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:50:25.474194 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:50:25.474220 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:50:25.474247 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:50:25.474275 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:50:25.474302 | orchestrator | 2025-04-10 00:50:25.474331 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-04-10 00:50:25.474357 | orchestrator | Thursday 10 April 2025 00:49:16 +0000 (0:00:00.711) 0:00:14.395 ******** 2025-04-10 00:50:25.474409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474612 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474645 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474708 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474774 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474800 | orchestrator | 2025-04-10 00:50:25.474824 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-04-10 00:50:25.474847 | orchestrator | Thursday 10 April 2025 00:49:19 +0000 (0:00:02.534) 0:00:16.929 ******** 2025-04-10 00:50:25.474870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.474983 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.475005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.475072 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.475112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.475137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.475161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.475185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.475233 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.475259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.475292 | orchestrator | 2025-04-10 00:50:25.475314 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-04-10 
00:50:25.475336 | orchestrator | Thursday 10 April 2025 00:49:23 +0000 (0:00:04.024) 0:00:20.954 ********
2025-04-10 00:50:25.475357 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:50:25.475378 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:50:25.475399 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:50:25.475420 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:50:25.475441 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:50:25.475463 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:50:25.475485 | orchestrator | 
2025-04-10 00:50:25.475509 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] ***
2025-04-10 00:50:25.475532 | orchestrator | Thursday 10 April 2025 00:49:26 +0000 (0:00:03.144) 0:00:24.099 ********
2025-04-10 00:50:25.475552 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:50:25.475573 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:50:25.475593 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:50:25.475614 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:50:25.475634 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:50:25.475659 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:50:25.475682 | orchestrator | 
2025-04-10 00:50:25.475704 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-04-10 00:50:25.475726 | orchestrator | Thursday 10 April 2025 00:49:29 +0000 (0:00:03.151) 0:00:27.251 ********
2025-04-10 00:50:25.475749 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:50:25.475771 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:50:25.475792 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:50:25.475813 | orchestrator | skipping: [testbed-node-3]
2025-04-10 00:50:25.475835 | orchestrator | skipping: [testbed-node-4]
2025-04-10 00:50:25.475858 | orchestrator | skipping: [testbed-node-5]
2025-04-10 00:50:25.475880 | orchestrator | 
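Each openvswitch container definition in this log carries a healthcheck block (`interval`, `retries`, `start_period`, `timeout`, and a `test` command such as `ovs-appctl version` or `ovsdb-client list-dbs`). As a rough sketch of how those parameters interact — a hypothetical simplification, not Kolla's or the container engine's actual implementation — the retry logic can be modeled as:

```python
import subprocess
import time


def check_health(test_cmd, interval=30, retries=3, timeout=30, runner=None):
    """Simplified model of a container healthcheck loop (hypothetical).

    Runs `test_cmd` every `interval` seconds; after `retries` consecutive
    failures the container is considered unhealthy. A `runner` callable
    returning an exit code can be injected for testing; by default the
    command is run via the shell with the given `timeout`.
    """
    run = runner or (lambda cmd: subprocess.run(
        cmd, shell=True, timeout=timeout).returncode)
    failures = 0
    while True:
        if run(test_cmd) == 0:
            failures = 0  # a single success resets the failure counter
        else:
            failures += 1
            if failures >= retries:
                return "unhealthy"
        time.sleep(interval)
```

The point of the model is that state only flips after `retries` consecutive failures, so a transient hiccup between successful probes never marks the container unhealthy.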
2025-04-10 00:50:25.475929 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-04-10 00:50:25.475958 | orchestrator | Thursday 10 April 2025 00:49:31 +0000 (0:00:01.710) 0:00:28.961 ******** 2025-04-10 00:50:25.475984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.476009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.476064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.476120 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.476146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.476170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.476194 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.476219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.476276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.476320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-10 00:50:25.476385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.476412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-10 00:50:25.476435 | orchestrator | 2025-04-10 00:50:25.476456 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-10 00:50:25.476479 | orchestrator | Thursday 10 April 2025 00:49:34 +0000 (0:00:03.272) 0:00:32.233 ******** 2025-04-10 00:50:25.476503 | orchestrator | 2025-04-10 00:50:25.476525 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-10 00:50:25.476547 | orchestrator | Thursday 10 April 2025 00:49:34 +0000 (0:00:00.121) 0:00:32.355 ******** 2025-04-10 00:50:25.476567 | orchestrator | 2025-04-10 00:50:25.476589 | 
orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-10 00:50:25.476625 | orchestrator | Thursday 10 April 2025 00:49:35 +0000 (0:00:00.418) 0:00:32.774 ******** 2025-04-10 00:50:25.476649 | orchestrator | 2025-04-10 00:50:25.476671 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-10 00:50:25.476692 | orchestrator | Thursday 10 April 2025 00:49:35 +0000 (0:00:00.133) 0:00:32.907 ******** 2025-04-10 00:50:25.476715 | orchestrator | 2025-04-10 00:50:25.476746 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-10 00:50:25.476769 | orchestrator | Thursday 10 April 2025 00:49:35 +0000 (0:00:00.404) 0:00:33.312 ******** 2025-04-10 00:50:25.476791 | orchestrator | 2025-04-10 00:50:25.476813 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-10 00:50:25.476834 | orchestrator | Thursday 10 April 2025 00:49:36 +0000 (0:00:00.261) 0:00:33.573 ******** 2025-04-10 00:50:25.476857 | orchestrator | 2025-04-10 00:50:25.476879 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-04-10 00:50:25.476900 | orchestrator | Thursday 10 April 2025 00:49:36 +0000 (0:00:00.361) 0:00:33.934 ******** 2025-04-10 00:50:25.477010 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:50:25.477036 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:50:25.477060 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:50:25.477081 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:50:25.477100 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:50:25.477121 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:50:25.477141 | orchestrator | 2025-04-10 00:50:25.477161 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-04-10 00:50:25.477181 | orchestrator | 
Thursday 10 April 2025 00:49:46 +0000 (0:00:10.028) 0:00:43.963 ******** 2025-04-10 00:50:25.477212 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:50:25.477233 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:50:25.477253 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:50:25.477271 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:50:25.477289 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:50:25.477309 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:50:25.477328 | orchestrator | 2025-04-10 00:50:25.477348 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-04-10 00:50:25.477367 | orchestrator | Thursday 10 April 2025 00:49:48 +0000 (0:00:02.335) 0:00:46.299 ******** 2025-04-10 00:50:25.477386 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:50:25.477406 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:50:25.477426 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:50:25.477458 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:50:25.477478 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:50:25.477497 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:50:25.477516 | orchestrator | 2025-04-10 00:50:25.477535 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-04-10 00:50:25.477555 | orchestrator | Thursday 10 April 2025 00:49:59 +0000 (0:00:11.147) 0:00:57.446 ******** 2025-04-10 00:50:25.477573 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-04-10 00:50:25.477593 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-04-10 00:50:25.477612 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-04-10 00:50:25.477632 | orchestrator | changed: [testbed-node-3] => (item={'col': 
'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-04-10 00:50:25.477652 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-04-10 00:50:25.477672 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-04-10 00:50:25.477693 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-04-10 00:50:25.477726 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-04-10 00:50:25.477745 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-04-10 00:50:25.477763 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-04-10 00:50:25.477782 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-04-10 00:50:25.477802 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-04-10 00:50:25.477822 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-10 00:50:25.477842 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-10 00:50:25.477868 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-10 00:50:25.477888 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-10 00:50:25.477934 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 
'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-10 00:50:25.477955 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-10 00:50:25.477974 | orchestrator | 2025-04-10 00:50:25.477993 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-04-10 00:50:25.478052 | orchestrator | Thursday 10 April 2025 00:50:08 +0000 (0:00:08.425) 0:01:05.872 ******** 2025-04-10 00:50:25.478080 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-04-10 00:50:25.478101 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:50:25.478121 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-04-10 00:50:25.478140 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:50:25.478160 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-04-10 00:50:25.478180 | orchestrator | skipping: [testbed-node-5] 2025-04-10 00:50:25.478200 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-04-10 00:50:25.478219 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-04-10 00:50:25.478238 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-04-10 00:50:25.478257 | orchestrator | 2025-04-10 00:50:25.478275 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-04-10 00:50:25.478294 | orchestrator | Thursday 10 April 2025 00:50:10 +0000 (0:00:02.377) 0:01:08.250 ******** 2025-04-10 00:50:25.478313 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-04-10 00:50:25.478332 | orchestrator | skipping: [testbed-node-3] 2025-04-10 00:50:25.478352 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-04-10 00:50:25.478372 | orchestrator | skipping: [testbed-node-4] 2025-04-10 00:50:25.478392 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-04-10 00:50:25.478411 | 
orchestrator | skipping: [testbed-node-5]
2025-04-10 00:50:25.478431 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-04-10 00:50:25.478465 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-04-10 00:50:28.508193 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-04-10 00:50:28.508361 | orchestrator |
2025-04-10 00:50:28.508384 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-04-10 00:50:28.508400 | orchestrator | Thursday 10 April 2025 00:50:14 +0000 (0:00:03.984) 0:01:12.234 ********
2025-04-10 00:50:28.508415 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:50:28.508430 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:50:28.508445 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:50:28.508459 | orchestrator | changed: [testbed-node-5]
2025-04-10 00:50:28.508516 | orchestrator | changed: [testbed-node-4]
2025-04-10 00:50:28.508531 | orchestrator | changed: [testbed-node-3]
2025-04-10 00:50:28.508545 | orchestrator |
2025-04-10 00:50:28.508560 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:50:28.508575 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-10 00:50:28.508592 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-10 00:50:28.508606 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-10 00:50:28.508620 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-10 00:50:28.508634 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-10 00:50:28.508666 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-10 00:50:28.508683 | orchestrator |
2025-04-10 00:50:28.508698 | orchestrator |
2025-04-10 00:50:28.508714 | orchestrator | TASKS RECAP ********************************************************************
2025-04-10 00:50:28.508731 | orchestrator | Thursday 10 April 2025 00:50:23 +0000 (0:00:09.083) 0:01:21.318 ********
2025-04-10 00:50:28.508747 | orchestrator | ===============================================================================
2025-04-10 00:50:28.508763 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.23s
2025-04-10 00:50:28.508779 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.03s
2025-04-10 00:50:28.508794 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.43s
2025-04-10 00:50:28.508809 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.02s
2025-04-10 00:50:28.508825 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.99s
2025-04-10 00:50:28.508840 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.98s
2025-04-10 00:50:28.508855 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.32s
2025-04-10 00:50:28.508870 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.27s
2025-04-10 00:50:28.508885 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 3.15s
2025-04-10 00:50:28.508901 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 3.14s
2025-04-10 00:50:28.508947 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.53s
2025-04-10 00:50:28.508963 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.38s
2025-04-10 00:50:28.508979 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.34s
2025-04-10 00:50:28.508995 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.75s
2025-04-10 00:50:28.509011 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.71s
2025-04-10 00:50:28.509026 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.70s
2025-04-10 00:50:28.509040 | orchestrator | module-load : Load modules ---------------------------------------------- 1.57s
2025-04-10 00:50:28.509054 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.42s
2025-04-10 00:50:28.509068 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.02s
2025-04-10 00:50:28.509082 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.71s
2025-04-10 00:50:28.509096 | orchestrator | 2025-04-10 00:50:25 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:50:28.509141 | orchestrator | 2025-04-10 00:50:28 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED
2025-04-10 00:50:28.510127 | orchestrator | 2025-04-10 00:50:28 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED
2025-04-10 00:50:28.510283 | orchestrator | 2025-04-10 00:50:28 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED
2025-04-10 00:50:28.511426 | orchestrator | 2025-04-10 00:50:28 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 00:50:28.512447 | orchestrator | 2025-04-10 00:50:28 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:50:28.512667 | orchestrator | 2025-04-10 00:50:28 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:50:31.551403 | orchestrator | 2025-04-10 00:50:31 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state 
STARTED 2025-04-10 00:50:31.551698 | orchestrator | 2025-04-10 00:50:31 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:50:31.553532 | orchestrator | 2025-04-10 00:50:31 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:50:31.554275 | orchestrator | 2025-04-10 00:50:31 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:50:31.555265 | orchestrator | 2025-04-10 00:50:31 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:50:34.590853 | orchestrator | 2025-04-10 00:50:31 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:50:34.591122 | orchestrator | 2025-04-10 00:50:34 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:50:34.591757 | orchestrator | 2025-04-10 00:50:34 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:50:34.592268 | orchestrator | 2025-04-10 00:50:34 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:50:34.592310 | orchestrator | 2025-04-10 00:50:34 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:50:34.595673 | orchestrator | 2025-04-10 00:50:34 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:50:37.638165 | orchestrator | 2025-04-10 00:50:34 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:50:37.638325 | orchestrator | 2025-04-10 00:50:37 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:50:37.640268 | orchestrator | 2025-04-10 00:50:37 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:50:37.641216 | orchestrator | 2025-04-10 00:50:37 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:50:37.645031 | orchestrator | 2025-04-10 00:50:37 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 
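The repeating `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` records above come from the orchestrator polling its queued deployment tasks until each one reaches a terminal state. A rough sketch of such a wait loop (the `fetch_state` callback, the stub, and the terminal-state names are assumptions for illustration, not the actual OSISM client API):

```python
import time
from typing import Callable, Iterable

DONE_STATES = {"SUCCESS", "FAILURE"}  # assumed terminal task states

def wait_for_tasks(task_ids: Iterable[str],
                   fetch_state: Callable[[str], str],
                   interval: float = 1.0) -> dict[str, str]:
    """Poll every task until all reach a terminal state, logging like the job output."""
    pending = list(task_ids)
    states: dict[str, str] = {}
    while pending:
        for task_id in pending:
            states[task_id] = fetch_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        pending = [t for t in pending if states[t] not in DONE_STATES]
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states

# Stub backend that reports STARTED twice before succeeding.
calls: dict[str, int] = {}
def stub(task_id: str) -> str:
    calls[task_id] = calls.get(task_id, 0) + 1
    return "STARTED" if calls[task_id] < 3 else "SUCCESS"

result = wait_for_tasks(["c1c4b355", "c08af7ae"], stub, interval=0.01)
print(result)  # both tasks end in SUCCESS
```

In the log above, five task UUIDs are polled together, which is why each poll cycle prints five state lines followed by a single wait message.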
2025-04-10 00:50:37.646299 | orchestrator | 2025-04-10 00:50:37 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:50:37.646383 | orchestrator | 2025-04-10 00:50:37 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:50:40.693203 | orchestrator | 2025-04-10 00:50:40 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:50:40.695161 | orchestrator | 2025-04-10 00:50:40 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:50:40.699127 | orchestrator | 2025-04-10 00:50:40 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:50:40.699661 | orchestrator | 2025-04-10 00:50:40 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:50:40.700331 | orchestrator | 2025-04-10 00:50:40 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:50:40.701068 | orchestrator | 2025-04-10 00:50:40 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:50:43.738314 | orchestrator | 2025-04-10 00:50:43 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:50:43.739417 | orchestrator | 2025-04-10 00:50:43 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:50:43.741352 | orchestrator | 2025-04-10 00:50:43 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:50:43.741501 | orchestrator | 2025-04-10 00:50:43 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:50:43.742013 | orchestrator | 2025-04-10 00:50:43 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:50:46.787998 | orchestrator | 2025-04-10 00:50:43 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:50:46.788142 | orchestrator | 2025-04-10 00:50:46 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:50:46.789190 | 
orchestrator | 2025-04-10 00:50:46 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:50:46.789807 | orchestrator | 2025-04-10 00:50:46 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:50:46.791929 | orchestrator | 2025-04-10 00:50:46 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:50:46.792622 | orchestrator | 2025-04-10 00:50:46 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:50:49.845750 | orchestrator | 2025-04-10 00:50:46 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:50:49.845893 | orchestrator | 2025-04-10 00:50:49 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:50:49.847728 | orchestrator | 2025-04-10 00:50:49 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:50:49.850342 | orchestrator | 2025-04-10 00:50:49 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:50:49.852328 | orchestrator | 2025-04-10 00:50:49 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:50:49.854899 | orchestrator | 2025-04-10 00:50:49 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:50:52.899434 | orchestrator | 2025-04-10 00:50:49 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:50:52.899649 | orchestrator | 2025-04-10 00:50:52 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:50:52.901253 | orchestrator | 2025-04-10 00:50:52 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:50:52.902197 | orchestrator | 2025-04-10 00:50:52 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:50:52.903116 | orchestrator | 2025-04-10 00:50:52 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:50:52.903968 | 
orchestrator | 2025-04-10 00:50:52 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:50:52.904397 | orchestrator | 2025-04-10 00:50:52 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:50:55.947862 | orchestrator | 2025-04-10 00:50:55 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:50:55.949363 | orchestrator | 2025-04-10 00:50:55 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:50:55.951846 | orchestrator | 2025-04-10 00:50:55 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:50:55.954106 | orchestrator | 2025-04-10 00:50:55 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:50:55.956167 | orchestrator | 2025-04-10 00:50:55 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:50:59.003626 | orchestrator | 2025-04-10 00:50:55 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:50:59.003774 | orchestrator | 2025-04-10 00:50:59 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:50:59.004049 | orchestrator | 2025-04-10 00:50:59 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:50:59.004079 | orchestrator | 2025-04-10 00:50:59 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:50:59.004100 | orchestrator | 2025-04-10 00:50:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:50:59.005476 | orchestrator | 2025-04-10 00:50:59 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:51:02.061171 | orchestrator | 2025-04-10 00:50:59 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:51:02.061312 | orchestrator | 2025-04-10 00:51:02 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:51:02.061839 | orchestrator | 2025-04-10 
00:51:02 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:51:02.062867 | orchestrator | 2025-04-10 00:51:02 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:51:02.064551 | orchestrator | 2025-04-10 00:51:02 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:51:02.065921 | orchestrator | 2025-04-10 00:51:02 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:51:05.125175 | orchestrator | 2025-04-10 00:51:02 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:51:47.811315 | orchestrator | 2025-04-10 00:51:47 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:51:47.811416 | orchestrator | 2025-04-10 00:51:47 | INFO  | Task 
c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state STARTED 2025-04-10 00:51:47.812114 | orchestrator | 2025-04-10 00:51:47 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:51:47.812812 | orchestrator | 2025-04-10 00:51:47 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:51:47.813389 | orchestrator | 2025-04-10 00:51:47 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:51:47.813450 | orchestrator | 2025-04-10 00:51:47 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:51:50.869776 | orchestrator | 2025-04-10 00:51:50 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:51:50.870612 | orchestrator | 2025-04-10 00:51:50 | INFO  | Task c08af7ae-0d3c-4eee-9d53-27f0257f92e8 is in state SUCCESS 2025-04-10 00:51:50.870668 | orchestrator | 2025-04-10 00:51:50.870686 | orchestrator | 2025-04-10 00:51:50.870701 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-04-10 00:51:50.870716 | orchestrator | 2025-04-10 00:51:50.870730 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-04-10 00:51:50.870744 | orchestrator | Thursday 10 April 2025 00:49:25 +0000 (0:00:00.186) 0:00:00.186 ******** 2025-04-10 00:51:50.870759 | orchestrator | ok: [localhost] => { 2025-04-10 00:51:50.870775 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-04-10 00:51:50.870790 | orchestrator | } 2025-04-10 00:51:50.870804 | orchestrator | 2025-04-10 00:51:50.870843 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-04-10 00:51:50.870858 | orchestrator | Thursday 10 April 2025 00:49:25 +0000 (0:00:00.047) 0:00:00.234 ******** 2025-04-10 00:51:50.870873 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-04-10 00:51:50.870888 | orchestrator | ...ignoring 2025-04-10 00:51:50.870931 | orchestrator | 2025-04-10 00:51:50.870946 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-04-10 00:51:50.870960 | orchestrator | Thursday 10 April 2025 00:49:29 +0000 (0:00:03.385) 0:00:03.619 ******** 2025-04-10 00:51:50.870974 | orchestrator | skipping: [localhost] 2025-04-10 00:51:50.870988 | orchestrator | 2025-04-10 00:51:50.871002 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-04-10 00:51:50.871016 | orchestrator | Thursday 10 April 2025 00:49:29 +0000 (0:00:00.139) 0:00:03.759 ******** 2025-04-10 00:51:50.871030 | orchestrator | ok: [localhost] 2025-04-10 00:51:50.871044 | orchestrator | 2025-04-10 00:51:50.871058 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 00:51:50.871072 | orchestrator | 2025-04-10 00:51:50.871086 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 00:51:50.871099 | orchestrator | Thursday 10 April 2025 00:49:29 +0000 (0:00:00.515) 0:00:04.274 ******** 2025-04-10 00:51:50.871113 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:51:50.871128 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:51:50.871142 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:51:50.871155 | orchestrator | 2025-04-10 00:51:50.871170 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 00:51:50.871184 | orchestrator | Thursday 10 April 2025 00:49:30 +0000 (0:00:00.912) 0:00:05.186 ******** 2025-04-10 00:51:50.871200 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-04-10 00:51:50.871217 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2025-04-10 00:51:50.871232 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-04-10 00:51:50.871247 | orchestrator | 2025-04-10 00:51:50.871263 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-04-10 00:51:50.871279 | orchestrator | 2025-04-10 00:51:50.871295 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-10 00:51:50.871311 | orchestrator | Thursday 10 April 2025 00:49:31 +0000 (0:00:00.655) 0:00:05.842 ******** 2025-04-10 00:51:50.871327 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:51:50.871344 | orchestrator | 2025-04-10 00:51:50.871360 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-04-10 00:51:50.871376 | orchestrator | Thursday 10 April 2025 00:49:33 +0000 (0:00:01.808) 0:00:07.651 ******** 2025-04-10 00:51:50.871391 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:51:50.871407 | orchestrator | 2025-04-10 00:51:50.871422 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-04-10 00:51:50.871438 | orchestrator | Thursday 10 April 2025 00:49:34 +0000 (0:00:01.636) 0:00:09.287 ******** 2025-04-10 00:51:50.871453 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:51:50.871470 | orchestrator | 2025-04-10 00:51:50.871486 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-04-10 00:51:50.871515 | orchestrator | Thursday 10 April 2025 00:49:35 +0000 (0:00:00.647) 0:00:09.935 ******** 2025-04-10 00:51:50.871531 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:51:50.871547 | orchestrator | 2025-04-10 00:51:50.871561 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-04-10 00:51:50.871575 | 
orchestrator | Thursday 10 April 2025 00:49:36 +0000 (0:00:01.388) 0:00:11.324 ******** 2025-04-10 00:51:50.871589 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:51:50.871603 | orchestrator | 2025-04-10 00:51:50.871734 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-04-10 00:51:50.871764 | orchestrator | Thursday 10 April 2025 00:49:38 +0000 (0:00:01.515) 0:00:12.839 ******** 2025-04-10 00:51:50.871779 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:51:50.871793 | orchestrator | 2025-04-10 00:51:50.871807 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-10 00:51:50.871821 | orchestrator | Thursday 10 April 2025 00:49:39 +0000 (0:00:00.680) 0:00:13.520 ******** 2025-04-10 00:51:50.871835 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-04-10 00:51:50.871849 | orchestrator | 2025-04-10 00:51:50.871863 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-04-10 00:51:50.871877 | orchestrator | Thursday 10 April 2025 00:49:40 +0000 (0:00:00.987) 0:00:14.507 ******** 2025-04-10 00:51:50.871909 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:51:50.871925 | orchestrator | 2025-04-10 00:51:50.871939 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-04-10 00:51:50.871953 | orchestrator | Thursday 10 April 2025 00:49:40 +0000 (0:00:00.791) 0:00:15.299 ******** 2025-04-10 00:51:50.871967 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:51:50.871981 | orchestrator | 2025-04-10 00:51:50.871995 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-04-10 00:51:50.872009 | orchestrator | Thursday 10 April 2025 00:49:41 +0000 (0:00:00.320) 0:00:15.620 ******** 2025-04-10 00:51:50.872023 | orchestrator | 
skipping: [testbed-node-0] 2025-04-10 00:51:50.872036 | orchestrator | 2025-04-10 00:51:50.872060 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-04-10 00:51:50.872075 | orchestrator | Thursday 10 April 2025 00:49:41 +0000 (0:00:00.396) 0:00:16.020 ******** 2025-04-10 00:51:50.872092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-10 00:51:50.872110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-10 00:51:50.872126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-10 00:51:50.872148 | orchestrator | 2025-04-10 00:51:50.872162 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-04-10 00:51:50.872176 | orchestrator | Thursday 10 April 2025 00:49:42 +0000 (0:00:01.165) 0:00:17.185 ******** 2025-04-10 00:51:50.872201 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-10 00:51:50.872217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-10 00:51:50.872232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-10 00:51:50.872259 | orchestrator | 2025-04-10 00:51:50.872274 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-04-10 00:51:50.872288 | orchestrator | Thursday 10 April 2025 00:49:44 +0000 (0:00:01.651) 0:00:18.837 ******** 2025-04-10 00:51:50.872302 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-10 00:51:50.872316 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-10 00:51:50.872330 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-10 00:51:50.872344 | 
orchestrator | 2025-04-10 00:51:50.872358 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-04-10 00:51:50.872378 | orchestrator | Thursday 10 April 2025 00:49:47 +0000 (0:00:02.615) 0:00:21.452 ******** 2025-04-10 00:51:50.872393 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-10 00:51:50.872407 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-10 00:51:50.872420 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-10 00:51:50.872434 | orchestrator | 2025-04-10 00:51:50.872448 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-04-10 00:51:50.872462 | orchestrator | Thursday 10 April 2025 00:49:50 +0000 (0:00:03.824) 0:00:25.276 ******** 2025-04-10 00:51:50.872476 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-10 00:51:50.872489 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-10 00:51:50.872503 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-10 00:51:50.872517 | orchestrator | 2025-04-10 00:51:50.872537 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-04-10 00:51:50.872551 | orchestrator | Thursday 10 April 2025 00:49:54 +0000 (0:00:03.386) 0:00:28.663 ******** 2025-04-10 00:51:50.872565 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-10 00:51:50.872579 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-10 00:51:50.872593 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-10 00:51:50.872607 | orchestrator | 2025-04-10 00:51:50.872621 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-04-10 00:51:50.872635 | orchestrator | Thursday 10 April 2025 00:49:56 +0000 (0:00:02.023) 0:00:30.687 ******** 2025-04-10 00:51:50.872649 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-10 00:51:50.872663 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-10 00:51:50.872677 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-10 00:51:50.872691 | orchestrator | 2025-04-10 00:51:50.872705 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-04-10 00:51:50.872724 | orchestrator | Thursday 10 April 2025 00:49:57 +0000 (0:00:01.682) 0:00:32.370 ******** 2025-04-10 00:51:50.872738 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-10 00:51:50.872752 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-10 00:51:50.872772 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-10 00:51:50.872786 | orchestrator | 2025-04-10 00:51:50.872800 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-10 00:51:50.872815 | orchestrator | Thursday 10 April 2025 00:49:59 +0000 (0:00:01.866) 0:00:34.236 ******** 2025-04-10 00:51:50.872829 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:51:50.872843 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:51:50.872857 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:51:50.872871 | orchestrator | 2025-04-10 
00:51:50.872884 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-04-10 00:51:50.872931 | orchestrator | Thursday 10 April 2025 00:50:01 +0000 (0:00:01.691) 0:00:35.927 ******** 2025-04-10 00:51:50.872947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-10 00:51:50.872962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-10 00:51:50.872987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-10 00:51:50.873009 | orchestrator | 2025-04-10 00:51:50.873024 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-04-10 00:51:50.873038 | orchestrator | Thursday 10 April 2025 00:50:03 +0000 (0:00:02.078) 0:00:38.005 ******** 2025-04-10 00:51:50.873051 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:51:50.873066 | orchestrator | changed: [testbed-node-1] 
2025-04-10 00:51:50.873080 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:51:50.873094 | orchestrator | 2025-04-10 00:51:50.873107 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-04-10 00:51:50.873121 | orchestrator | Thursday 10 April 2025 00:50:04 +0000 (0:00:01.120) 0:00:39.126 ******** 2025-04-10 00:51:50.873135 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:51:50.873149 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:51:50.873163 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:51:50.873176 | orchestrator | 2025-04-10 00:51:50.873190 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-04-10 00:51:50.873204 | orchestrator | Thursday 10 April 2025 00:50:10 +0000 (0:00:06.146) 0:00:45.273 ******** 2025-04-10 00:51:50.873218 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:51:50.873232 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:51:50.873246 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:51:50.873260 | orchestrator | 2025-04-10 00:51:50.873274 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-10 00:51:50.873288 | orchestrator | 2025-04-10 00:51:50.873301 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-10 00:51:50.873315 | orchestrator | Thursday 10 April 2025 00:50:11 +0000 (0:00:00.356) 0:00:45.629 ******** 2025-04-10 00:51:50.873329 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:51:50.873343 | orchestrator | 2025-04-10 00:51:50.873357 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-10 00:51:50.873371 | orchestrator | Thursday 10 April 2025 00:50:11 +0000 (0:00:00.754) 0:00:46.383 ******** 2025-04-10 00:51:50.873384 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:51:50.873398 | orchestrator | 2025-04-10 
00:51:50.873412 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-10 00:51:50.873426 | orchestrator | Thursday 10 April 2025 00:50:12 +0000 (0:00:00.264) 0:00:46.648 ******** 2025-04-10 00:51:50.873440 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:51:50.873454 | orchestrator | 2025-04-10 00:51:50.873468 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-10 00:51:50.873482 | orchestrator | Thursday 10 April 2025 00:50:19 +0000 (0:00:06.824) 0:00:53.473 ******** 2025-04-10 00:51:50.873495 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:51:50.873509 | orchestrator | 2025-04-10 00:51:50.873523 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-10 00:51:50.873537 | orchestrator | 2025-04-10 00:51:50.873551 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-10 00:51:50.873564 | orchestrator | Thursday 10 April 2025 00:51:09 +0000 (0:00:50.317) 0:01:43.790 ******** 2025-04-10 00:51:50.873578 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:51:50.873592 | orchestrator | 2025-04-10 00:51:50.873606 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-10 00:51:50.873620 | orchestrator | Thursday 10 April 2025 00:51:10 +0000 (0:00:00.735) 0:01:44.525 ******** 2025-04-10 00:51:50.873633 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:51:50.873647 | orchestrator | 2025-04-10 00:51:50.873661 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-10 00:51:50.873675 | orchestrator | Thursday 10 April 2025 00:51:10 +0000 (0:00:00.393) 0:01:44.919 ******** 2025-04-10 00:51:50.873689 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:51:50.873702 | orchestrator | 2025-04-10 00:51:50.873716 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2025-04-10 00:51:50.873730 | orchestrator | Thursday 10 April 2025 00:51:12 +0000 (0:00:01.934) 0:01:46.854 ******** 2025-04-10 00:51:50.873750 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:51:50.873764 | orchestrator | 2025-04-10 00:51:50.873778 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-10 00:51:50.873792 | orchestrator | 2025-04-10 00:51:50.873806 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-10 00:51:50.873820 | orchestrator | Thursday 10 April 2025 00:51:27 +0000 (0:00:15.350) 0:02:02.204 ******** 2025-04-10 00:51:50.873833 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:51:50.873847 | orchestrator | 2025-04-10 00:51:50.873866 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-10 00:51:50.873881 | orchestrator | Thursday 10 April 2025 00:51:28 +0000 (0:00:00.646) 0:02:02.851 ******** 2025-04-10 00:51:50.873951 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:51:50.873973 | orchestrator | 2025-04-10 00:51:50.873988 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-10 00:51:50.874008 | orchestrator | Thursday 10 April 2025 00:51:28 +0000 (0:00:00.270) 0:02:03.122 ******** 2025-04-10 00:51:50.874183 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:51:50.874205 | orchestrator | 2025-04-10 00:51:50.874219 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-10 00:51:50.874231 | orchestrator | Thursday 10 April 2025 00:51:30 +0000 (0:00:02.020) 0:02:05.142 ******** 2025-04-10 00:51:50.874243 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:51:50.874256 | orchestrator | 2025-04-10 00:51:50.874268 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2025-04-10 00:51:50.874281 | orchestrator | 2025-04-10 00:51:50.874293 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-04-10 00:51:50.874306 | orchestrator | Thursday 10 April 2025 00:51:43 +0000 (0:00:13.184) 0:02:18.327 ******** 2025-04-10 00:51:50.874318 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:51:50.874330 | orchestrator | 2025-04-10 00:51:50.874343 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-04-10 00:51:50.874355 | orchestrator | Thursday 10 April 2025 00:51:44 +0000 (0:00:00.669) 0:02:18.997 ******** 2025-04-10 00:51:50.874367 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-10 00:51:50.874379 | orchestrator | enable_outward_rabbitmq_True 2025-04-10 00:51:50.874392 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-10 00:51:50.874404 | orchestrator | outward_rabbitmq_restart 2025-04-10 00:51:50.874417 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:51:50.874429 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:51:50.874442 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:51:50.874454 | orchestrator | 2025-04-10 00:51:50.874467 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-04-10 00:51:50.874479 | orchestrator | skipping: no hosts matched 2025-04-10 00:51:50.874491 | orchestrator | 2025-04-10 00:51:50.874503 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-04-10 00:51:50.874515 | orchestrator | skipping: no hosts matched 2025-04-10 00:51:50.874528 | orchestrator | 2025-04-10 00:51:50.874540 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-04-10 00:51:50.874552 | orchestrator | skipping: no hosts matched 
2025-04-10 00:51:50.874564 | orchestrator | 2025-04-10 00:51:50.874577 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:51:50.874589 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-04-10 00:51:50.874602 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-10 00:51:50.874615 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:51:50.874636 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-10 00:51:50.874649 | orchestrator | 2025-04-10 00:51:50.874661 | orchestrator | 2025-04-10 00:51:50.874673 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 00:51:50.874686 | orchestrator | Thursday 10 April 2025 00:51:47 +0000 (0:00:03.000) 0:02:21.997 ******** 2025-04-10 00:51:50.874698 | orchestrator | =============================================================================== 2025-04-10 00:51:50.874711 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 78.86s 2025-04-10 00:51:50.874723 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.78s 2025-04-10 00:51:50.874735 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.15s 2025-04-10 00:51:50.874748 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.82s 2025-04-10 00:51:50.874760 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 3.39s 2025-04-10 00:51:50.874772 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.38s 2025-04-10 00:51:50.874784 | orchestrator | rabbitmq : Enable all stable feature flags 
------------------------------ 3.00s 2025-04-10 00:51:50.874796 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.62s 2025-04-10 00:51:50.874809 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.13s 2025-04-10 00:51:50.874823 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.08s 2025-04-10 00:51:50.874837 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.02s 2025-04-10 00:51:50.874851 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.87s 2025-04-10 00:51:50.874865 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.81s 2025-04-10 00:51:50.874884 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.69s 2025-04-10 00:51:50.874918 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.68s 2025-04-10 00:51:50.874932 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.65s 2025-04-10 00:51:50.874946 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.64s 2025-04-10 00:51:50.874960 | orchestrator | rabbitmq : Check if running RabbitMQ is at most one version behind ------ 1.52s 2025-04-10 00:51:50.874975 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.39s 2025-04-10 00:51:50.874988 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.17s 2025-04-10 00:51:50.875007 | orchestrator | 2025-04-10 00:51:50 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:51:53.914105 | orchestrator | 2025-04-10 00:51:50 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:51:53.914224 | orchestrator | 2025-04-10 00:51:50 | INFO  | Task 
1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:51:53.914241 | orchestrator | 2025-04-10 00:51:50 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:51:53.914270 | orchestrator | 2025-04-10 00:51:53 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:51:53.915131 | orchestrator | 2025-04-10 00:51:53 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:51:53.916252 | orchestrator | 2025-04-10 00:51:53 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:51:53.917365 | orchestrator | 2025-04-10 00:51:53 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:51:53.917534 | orchestrator | 2025-04-10 00:51:53 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:51:56.958417 | orchestrator | 2025-04-10 00:51:56 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:51:56.960511 | orchestrator | 2025-04-10 00:51:56 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:51:56.961727 | orchestrator | 2025-04-10 00:51:56 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:51:56.962799 | orchestrator | 2025-04-10 00:51:56 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:51:56.963116 | orchestrator | 2025-04-10 00:51:56 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:52:00.025041 | orchestrator | 2025-04-10 00:52:00 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:52:00.026261 | orchestrator | 2025-04-10 00:52:00 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state STARTED 2025-04-10 00:52:00.026441 | orchestrator | 2025-04-10 00:52:00 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:52:00.027648 | orchestrator | 2025-04-10 00:52:00 | INFO  | Task 
1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:52:58.030468 | orchestrator | 2025-04-10 00:52:54 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:52:58.030505 | orchestrator | 2025-04-10 00:52:58 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:52:58.035009 | orchestrator | 2025-04-10 00:52:58.035091 | orchestrator | 2025-04-10 00:52:58.035120 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 00:52:58.035146 | orchestrator | 2025-04-10 00:52:58.035316 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 00:52:58.035346 | orchestrator | Thursday 10 April 2025 00:50:28 +0000 (0:00:00.421) 0:00:00.421 ******** 2025-04-10 00:52:58.035373 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:52:58.035401 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:52:58.035427 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:52:58.035453 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.035478 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.035504 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.035530 | orchestrator | 2025-04-10 00:52:58.035557 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 00:52:58.035583 | orchestrator | Thursday 10 April 2025 00:50:29 +0000 (0:00:00.944) 0:00:01.366 ******** 2025-04-10 00:52:58.035609 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-04-10 00:52:58.035636 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-04-10 00:52:58.035662 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-04-10 00:52:58.035679 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-04-10 00:52:58.035695 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-04-10 00:52:58.035745 | orchestrator | ok: [testbed-node-2] 
=> (item=enable_ovn_True) 2025-04-10 00:52:58.035762 | orchestrator | 2025-04-10 00:52:58.035778 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-04-10 00:52:58.035794 | orchestrator | 2025-04-10 00:52:58.035809 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-04-10 00:52:58.035876 | orchestrator | Thursday 10 April 2025 00:50:31 +0000 (0:00:02.349) 0:00:03.716 ******** 2025-04-10 00:52:58.035917 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:52:58.035934 | orchestrator | 2025-04-10 00:52:58.035948 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-04-10 00:52:58.035962 | orchestrator | Thursday 10 April 2025 00:50:33 +0000 (0:00:01.329) 0:00:05.045 ******** 2025-04-10 00:52:58.035978 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.035995 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036010 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036083 | orchestrator | 2025-04-10 00:52:58.036097 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] 
************ 2025-04-10 00:52:58.036111 | orchestrator | Thursday 10 April 2025 00:50:34 +0000 (0:00:01.127) 0:00:06.173 ******** 2025-04-10 00:52:58.036137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036176 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036190 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036233 | orchestrator | 2025-04-10 00:52:58.036247 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-04-10 00:52:58.036261 | orchestrator | Thursday 10 April 2025 00:50:36 +0000 (0:00:02.185) 0:00:08.358 ******** 2025-04-10 00:52:58.036275 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036290 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036382 | orchestrator | 2025-04-10 00:52:58.036397 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-04-10 00:52:58.036411 | orchestrator | Thursday 10 April 2025 00:50:37 +0000 (0:00:01.238) 0:00:09.597 ******** 2025-04-10 00:52:58.036425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036454 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036468 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036530 | orchestrator | 2025-04-10 00:52:58.036544 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-04-10 00:52:58.036559 | orchestrator | Thursday 10 April 2025 00:50:39 +0000 (0:00:02.115) 0:00:11.712 ******** 2025-04-10 00:52:58.036573 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036601 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.036657 | orchestrator | 2025-04-10 00:52:58.036671 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-04-10 00:52:58.036685 | orchestrator | Thursday 10 April 2025 00:50:41 +0000 (0:00:02.110) 0:00:13.822 ******** 2025-04-10 00:52:58.036699 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:52:58.036714 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:52:58.036728 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:52:58.036741 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:52:58.036755 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:52:58.036769 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:52:58.036789 | orchestrator | 2025-04-10 00:52:58.036803 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-04-10 00:52:58.036817 | orchestrator | Thursday 10 April 2025 00:50:45 +0000 (0:00:03.429) 0:00:17.251 ******** 2025-04-10 00:52:58.036831 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-04-10 00:52:58.036845 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-04-10 00:52:58.036859 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-04-10 00:52:58.036879 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-04-10 00:52:58.036923 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-04-10 00:52:58.036938 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-04-10 00:52:58.036952 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-10 00:52:58.036966 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-10 00:52:58.036980 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-10 00:52:58.037000 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-10 00:52:58.037015 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-10 00:52:58.037029 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-10 00:52:58.037043 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-10 00:52:58.037059 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-10 00:52:58.037073 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-10 00:52:58.037087 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-10 00:52:58.037102 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 
2025-04-10 00:52:58.037116 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-10 00:52:58.037130 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-10 00:52:58.037145 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-10 00:52:58.037159 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-10 00:52:58.037173 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-10 00:52:58.037186 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-10 00:52:58.037200 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-10 00:52:58.037214 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-10 00:52:58.037228 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-10 00:52:58.037242 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-10 00:52:58.037256 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-10 00:52:58.037277 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-10 00:52:58.037292 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-10 00:52:58.037306 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-10 00:52:58.037320 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-10 00:52:58.037334 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-10 00:52:58.037348 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-10 00:52:58.037362 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-10 00:52:58.037376 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-10 00:52:58.037390 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-10 00:52:58.037404 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-10 00:52:58.037418 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-10 00:52:58.037433 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-10 00:52:58.037453 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-10 00:52:58.037468 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-10 00:52:58.037482 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-04-10 00:52:58.037496 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-04-10 00:52:58.037514 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 
'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-04-10 00:52:58.037537 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-04-10 00:52:58.037561 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-04-10 00:52:58.037585 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-04-10 00:52:58.037608 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-10 00:52:58.037633 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-10 00:52:58.037657 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-10 00:52:58.037672 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-10 00:52:58.037686 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-10 00:52:58.037701 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-10 00:52:58.037715 | orchestrator | 2025-04-10 00:52:58.037729 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-10 00:52:58.037744 | orchestrator | Thursday 10 April 2025 00:51:05 +0000 (0:00:19.798) 0:00:37.050 ******** 2025-04-10 00:52:58.037766 | orchestrator | 2025-04-10 00:52:58.037780 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-10 
00:52:58.037794 | orchestrator | Thursday 10 April 2025 00:51:05 +0000 (0:00:00.067) 0:00:37.117 ******** 2025-04-10 00:52:58.037808 | orchestrator | 2025-04-10 00:52:58.037822 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-10 00:52:58.037836 | orchestrator | Thursday 10 April 2025 00:51:05 +0000 (0:00:00.294) 0:00:37.411 ******** 2025-04-10 00:52:58.037849 | orchestrator | 2025-04-10 00:52:58.037863 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-10 00:52:58.037877 | orchestrator | Thursday 10 April 2025 00:51:05 +0000 (0:00:00.109) 0:00:37.520 ******** 2025-04-10 00:52:58.037952 | orchestrator | 2025-04-10 00:52:58.037967 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-10 00:52:58.037981 | orchestrator | Thursday 10 April 2025 00:51:05 +0000 (0:00:00.061) 0:00:37.582 ******** 2025-04-10 00:52:58.037995 | orchestrator | 2025-04-10 00:52:58.038009 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-10 00:52:58.038073 | orchestrator | Thursday 10 April 2025 00:51:05 +0000 (0:00:00.062) 0:00:37.644 ******** 2025-04-10 00:52:58.038100 | orchestrator | 2025-04-10 00:52:58.038126 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-04-10 00:52:58.038151 | orchestrator | Thursday 10 April 2025 00:51:05 +0000 (0:00:00.271) 0:00:37.916 ******** 2025-04-10 00:52:58.038176 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.038204 | orchestrator | ok: [testbed-node-5] 2025-04-10 00:52:58.038230 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.038256 | orchestrator | ok: [testbed-node-3] 2025-04-10 00:52:58.038282 | orchestrator | ok: [testbed-node-4] 2025-04-10 00:52:58.038308 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.038335 | orchestrator | 2025-04-10 00:52:58.038362 | 
orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-04-10 00:52:58.038386 | orchestrator | Thursday 10 April 2025 00:51:08 +0000 (0:00:02.161) 0:00:40.077 ******** 2025-04-10 00:52:58.038408 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:52:58.038432 | orchestrator | changed: [testbed-node-4] 2025-04-10 00:52:58.038455 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:52:58.038479 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:52:58.038503 | orchestrator | changed: [testbed-node-3] 2025-04-10 00:52:58.038526 | orchestrator | changed: [testbed-node-5] 2025-04-10 00:52:58.038549 | orchestrator | 2025-04-10 00:52:58.038572 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-04-10 00:52:58.038595 | orchestrator | 2025-04-10 00:52:58.038618 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-10 00:52:58.038634 | orchestrator | Thursday 10 April 2025 00:51:26 +0000 (0:00:18.445) 0:00:58.523 ******** 2025-04-10 00:52:58.038646 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:52:58.038659 | orchestrator | 2025-04-10 00:52:58.038671 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-10 00:52:58.038690 | orchestrator | Thursday 10 April 2025 00:51:27 +0000 (0:00:00.657) 0:00:59.180 ******** 2025-04-10 00:52:58.038703 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:52:58.038716 | orchestrator | 2025-04-10 00:52:58.038735 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-04-10 00:52:58.038753 | orchestrator | Thursday 10 April 2025 00:51:28 +0000 (0:00:00.978) 0:01:00.158 ******** 2025-04-10 00:52:58.038765 | orchestrator | 
ok: [testbed-node-1] 2025-04-10 00:52:58.038778 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.038790 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.038803 | orchestrator | 2025-04-10 00:52:58.038815 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-04-10 00:52:58.038828 | orchestrator | Thursday 10 April 2025 00:51:29 +0000 (0:00:01.044) 0:01:01.203 ******** 2025-04-10 00:52:58.038855 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.038867 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.038880 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.038913 | orchestrator | 2025-04-10 00:52:58.038925 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-04-10 00:52:58.038938 | orchestrator | Thursday 10 April 2025 00:51:29 +0000 (0:00:00.439) 0:01:01.642 ******** 2025-04-10 00:52:58.038950 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.038963 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.038976 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.038988 | orchestrator | 2025-04-10 00:52:58.039000 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-04-10 00:52:58.039013 | orchestrator | Thursday 10 April 2025 00:51:30 +0000 (0:00:00.600) 0:01:02.243 ******** 2025-04-10 00:52:58.039025 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.039038 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.039050 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.039063 | orchestrator | 2025-04-10 00:52:58.039076 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-04-10 00:52:58.039088 | orchestrator | Thursday 10 April 2025 00:51:30 +0000 (0:00:00.562) 0:01:02.806 ******** 2025-04-10 00:52:58.039100 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.039113 | orchestrator | ok: 
[testbed-node-1] 2025-04-10 00:52:58.039125 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.039137 | orchestrator | 2025-04-10 00:52:58.039150 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-04-10 00:52:58.039162 | orchestrator | Thursday 10 April 2025 00:51:31 +0000 (0:00:00.386) 0:01:03.192 ******** 2025-04-10 00:52:58.039175 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.039188 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.039205 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.039217 | orchestrator | 2025-04-10 00:52:58.039230 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-04-10 00:52:58.039243 | orchestrator | Thursday 10 April 2025 00:51:31 +0000 (0:00:00.741) 0:01:03.934 ******** 2025-04-10 00:52:58.039255 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.039268 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.039280 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.039292 | orchestrator | 2025-04-10 00:52:58.039305 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-04-10 00:52:58.039317 | orchestrator | Thursday 10 April 2025 00:51:32 +0000 (0:00:00.497) 0:01:04.432 ******** 2025-04-10 00:52:58.039330 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.039342 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.039354 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.039367 | orchestrator | 2025-04-10 00:52:58.039379 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-04-10 00:52:58.039392 | orchestrator | Thursday 10 April 2025 00:51:33 +0000 (0:00:00.850) 0:01:05.282 ******** 2025-04-10 00:52:58.039404 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.039417 | orchestrator | skipping: 
[testbed-node-1] 2025-04-10 00:52:58.039429 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.039441 | orchestrator | 2025-04-10 00:52:58.039454 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-04-10 00:52:58.039466 | orchestrator | Thursday 10 April 2025 00:51:33 +0000 (0:00:00.546) 0:01:05.829 ******** 2025-04-10 00:52:58.039478 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.039491 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.039503 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.039515 | orchestrator | 2025-04-10 00:52:58.039528 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-04-10 00:52:58.039541 | orchestrator | Thursday 10 April 2025 00:51:34 +0000 (0:00:00.557) 0:01:06.387 ******** 2025-04-10 00:52:58.039553 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.039571 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.039584 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.039596 | orchestrator | 2025-04-10 00:52:58.039609 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-04-10 00:52:58.039621 | orchestrator | Thursday 10 April 2025 00:51:35 +0000 (0:00:00.677) 0:01:07.064 ******** 2025-04-10 00:52:58.039634 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.039646 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.039659 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.039671 | orchestrator | 2025-04-10 00:52:58.039683 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-04-10 00:52:58.039696 | orchestrator | Thursday 10 April 2025 00:51:35 +0000 (0:00:00.715) 0:01:07.780 ******** 2025-04-10 00:52:58.039708 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.039721 | orchestrator | skipping: 
[testbed-node-1] 2025-04-10 00:52:58.039733 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.039745 | orchestrator | 2025-04-10 00:52:58.039758 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-04-10 00:52:58.039770 | orchestrator | Thursday 10 April 2025 00:51:36 +0000 (0:00:00.382) 0:01:08.162 ******** 2025-04-10 00:52:58.039783 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.039795 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.039842 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.039856 | orchestrator | 2025-04-10 00:52:58.039869 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-04-10 00:52:58.039881 | orchestrator | Thursday 10 April 2025 00:51:36 +0000 (0:00:00.605) 0:01:08.767 ******** 2025-04-10 00:52:58.039912 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.039925 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.039937 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.039949 | orchestrator | 2025-04-10 00:52:58.039969 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-04-10 00:52:58.039982 | orchestrator | Thursday 10 April 2025 00:51:37 +0000 (0:00:00.578) 0:01:09.346 ******** 2025-04-10 00:52:58.039994 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.040007 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.040020 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.040032 | orchestrator | 2025-04-10 00:52:58.040044 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-04-10 00:52:58.040061 | orchestrator | Thursday 10 April 2025 00:51:37 +0000 (0:00:00.478) 0:01:09.824 ******** 2025-04-10 00:52:58.040074 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.040087 | orchestrator | skipping: 
[testbed-node-1] 2025-04-10 00:52:58.040099 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.040112 | orchestrator | 2025-04-10 00:52:58.040124 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-10 00:52:58.040136 | orchestrator | Thursday 10 April 2025 00:51:38 +0000 (0:00:00.318) 0:01:10.143 ******** 2025-04-10 00:52:58.040149 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:52:58.040161 | orchestrator | 2025-04-10 00:52:58.040174 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-04-10 00:52:58.040186 | orchestrator | Thursday 10 April 2025 00:51:39 +0000 (0:00:01.164) 0:01:11.308 ******** 2025-04-10 00:52:58.040198 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.040211 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.040223 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.040236 | orchestrator | 2025-04-10 00:52:58.040248 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-04-10 00:52:58.040260 | orchestrator | Thursday 10 April 2025 00:51:40 +0000 (0:00:01.068) 0:01:12.376 ******** 2025-04-10 00:52:58.040273 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.040285 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.040305 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.040318 | orchestrator | 2025-04-10 00:52:58.040330 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-04-10 00:52:58.040343 | orchestrator | Thursday 10 April 2025 00:51:41 +0000 (0:00:01.040) 0:01:13.416 ******** 2025-04-10 00:52:58.040355 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.040368 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.040380 | orchestrator | skipping: [testbed-node-2] 
2025-04-10 00:52:58.040392 | orchestrator | 2025-04-10 00:52:58.040405 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-04-10 00:52:58.040417 | orchestrator | Thursday 10 April 2025 00:51:42 +0000 (0:00:00.645) 0:01:14.061 ******** 2025-04-10 00:52:58.040430 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.040442 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.040459 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.040472 | orchestrator | 2025-04-10 00:52:58.040484 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-04-10 00:52:58.040497 | orchestrator | Thursday 10 April 2025 00:51:43 +0000 (0:00:00.938) 0:01:15.000 ******** 2025-04-10 00:52:58.040509 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.040522 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.040534 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.040546 | orchestrator | 2025-04-10 00:52:58.040559 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-04-10 00:52:58.040571 | orchestrator | Thursday 10 April 2025 00:51:43 +0000 (0:00:00.462) 0:01:15.463 ******** 2025-04-10 00:52:58.040583 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.040596 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.040612 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.040625 | orchestrator | 2025-04-10 00:52:58.040637 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-04-10 00:52:58.040650 | orchestrator | Thursday 10 April 2025 00:51:44 +0000 (0:00:00.565) 0:01:16.028 ******** 2025-04-10 00:52:58.040662 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.040675 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.040687 | orchestrator | skipping: 
[testbed-node-2] 2025-04-10 00:52:58.040699 | orchestrator | 2025-04-10 00:52:58.040712 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-04-10 00:52:58.040724 | orchestrator | Thursday 10 April 2025 00:51:44 +0000 (0:00:00.554) 0:01:16.583 ******** 2025-04-10 00:52:58.040737 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.040749 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.040761 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.040774 | orchestrator | 2025-04-10 00:52:58.040786 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-04-10 00:52:58.040799 | orchestrator | Thursday 10 April 2025 00:51:45 +0000 (0:00:01.001) 0:01:17.585 ******** 2025-04-10 00:52:58.040812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.040827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.040854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58 | INFO  | Task 9dbc5530-aa40-43db-909b-d0603b7fb2b3 is in state SUCCESS 2025-04-10 00:52:58.040877 | orchestrator | 
2025-04-10 00:52:58.040942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.040962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.040976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.040988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041026 | orchestrator | 2025-04-10 00:52:58.041039 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-10 00:52:58.041051 | orchestrator | Thursday 10 April 2025 00:51:47 +0000 (0:00:01.708) 0:01:19.293 ******** 2025-04-10 00:52:58.041064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041185 | orchestrator | 2025-04-10 00:52:58.041195 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-10 00:52:58.041205 | orchestrator | Thursday 10 April 2025 00:51:52 +0000 (0:00:05.479) 0:01:24.772 ******** 2025-04-10 00:52:58.041216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041226 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.041330 | orchestrator | 2025-04-10 00:52:58.041340 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-10 00:52:58.041351 | orchestrator | Thursday 10 April 2025 00:51:55 +0000 (0:00:02.406) 0:01:27.179 ******** 2025-04-10 00:52:58.041361 | orchestrator | 2025-04-10 00:52:58.041371 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2025-04-10 00:52:58.041381 | orchestrator | Thursday 10 April 2025 00:51:55 +0000 (0:00:00.066) 0:01:27.246 ******** 2025-04-10 00:52:58.041391 | orchestrator | 2025-04-10 00:52:58.041401 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-10 00:52:58.041411 | orchestrator | Thursday 10 April 2025 00:51:55 +0000 (0:00:00.057) 0:01:27.304 ******** 2025-04-10 00:52:58.041421 | orchestrator | 2025-04-10 00:52:58.041431 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-04-10 00:52:58.041453 | orchestrator | Thursday 10 April 2025 00:51:55 +0000 (0:00:00.224) 0:01:27.529 ******** 2025-04-10 00:52:58.041464 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:52:58.041474 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:52:58.041484 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:52:58.041494 | orchestrator | 2025-04-10 00:52:58.041504 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-04-10 00:52:58.041520 | orchestrator | Thursday 10 April 2025 00:51:58 +0000 (0:00:02.500) 0:01:30.030 ******** 2025-04-10 00:52:58.041538 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:52:58.041554 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:52:58.041572 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:52:58.041586 | orchestrator | 2025-04-10 00:52:58.041596 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-10 00:52:58.041607 | orchestrator | Thursday 10 April 2025 00:52:05 +0000 (0:00:07.840) 0:01:37.870 ******** 2025-04-10 00:52:58.041617 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:52:58.041627 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:52:58.041637 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:52:58.041647 | orchestrator | 2025-04-10 
00:52:58.041657 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-10 00:52:58.041667 | orchestrator | Thursday 10 April 2025 00:52:12 +0000 (0:00:06.937) 0:01:44.807 ******** 2025-04-10 00:52:58.041678 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.041688 | orchestrator | 2025-04-10 00:52:58.041698 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-04-10 00:52:58.041708 | orchestrator | Thursday 10 April 2025 00:52:13 +0000 (0:00:00.142) 0:01:44.950 ******** 2025-04-10 00:52:58.041718 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.041734 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.041745 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.041755 | orchestrator | 2025-04-10 00:52:58.041766 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-10 00:52:58.041776 | orchestrator | Thursday 10 April 2025 00:52:14 +0000 (0:00:01.113) 0:01:46.063 ******** 2025-04-10 00:52:58.041786 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.041796 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.041806 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:52:58.041816 | orchestrator | 2025-04-10 00:52:58.041826 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-10 00:52:58.041836 | orchestrator | Thursday 10 April 2025 00:52:14 +0000 (0:00:00.629) 0:01:46.692 ******** 2025-04-10 00:52:58.041846 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.041857 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.041867 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.041877 | orchestrator | 2025-04-10 00:52:58.041908 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-10 00:52:58.041919 | orchestrator | Thursday 10 April 2025 
00:52:15 +0000 (0:00:00.986) 0:01:47.679 ******** 2025-04-10 00:52:58.041929 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.041939 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.041949 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:52:58.041959 | orchestrator | 2025-04-10 00:52:58.041970 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-04-10 00:52:58.041980 | orchestrator | Thursday 10 April 2025 00:52:16 +0000 (0:00:00.671) 0:01:48.350 ******** 2025-04-10 00:52:58.041990 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.042000 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.042011 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.042049 | orchestrator | 2025-04-10 00:52:58.042060 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-04-10 00:52:58.042071 | orchestrator | Thursday 10 April 2025 00:52:17 +0000 (0:00:01.125) 0:01:49.476 ******** 2025-04-10 00:52:58.042081 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.042091 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.042110 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.042120 | orchestrator | 2025-04-10 00:52:58.042131 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-04-10 00:52:58.042141 | orchestrator | Thursday 10 April 2025 00:52:18 +0000 (0:00:00.744) 0:01:50.220 ******** 2025-04-10 00:52:58.042151 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.042161 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.042172 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.042182 | orchestrator | 2025-04-10 00:52:58.042192 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-04-10 00:52:58.042202 | orchestrator | Thursday 10 April 2025 00:52:18 +0000 (0:00:00.513) 0:01:50.734 ******** 
2025-04-10 00:52:58.042213 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042224 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042235 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042245 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042256 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042274 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042285 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042295 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042311 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042322 | orchestrator | 
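The repeated `(item={'key': ..., 'value': {...}})` entries in these tasks come from looping over a mapping of OVN services. As an illustrative sketch only (not kolla-ansible's actual role code), the data driving that loop can be modeled like this, with the values copied from the log output above:

```python
# Illustrative model of the ovn-db services mapping seen in the log entries.
# The dict shape mirrors the (item={'key': ..., 'value': ...}) loop items;
# this is a sketch for clarity, not kolla-ansible's real role variables.
TAG = "24.3.4.20241206"
REGISTRY = "registry.osism.tech/kolla/release"

ovn_db_services = {
    "ovn-northd": {
        "container_name": "ovn_northd",
        "group": "ovn-northd",
        "enabled": True,
        "image": f"{REGISTRY}/ovn-northd:{TAG}",
        "volumes": [
            "/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    "ovn-nb-db": {
        "container_name": "ovn_nb_db",
        "group": "ovn-nb-db",
        "enabled": True,
        "image": f"{REGISTRY}/ovn-nb-db-server:{TAG}",
        "volumes": [
            "/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "ovn_nb_db:/var/lib/openvswitch/ovn-nb/",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    "ovn-sb-db": {
        "container_name": "ovn_sb_db",
        "group": "ovn-sb-db",
        "enabled": True,
        "image": f"{REGISTRY}/ovn-sb-db-server:{TAG}",
        "volumes": [
            "/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "ovn_sb_db:/var/lib/openvswitch/ovn-sb/",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

def enabled_services(services):
    """Yield (name, spec) pairs for enabled services, like the dict2items loop."""
    for name, spec in services.items():
        if spec["enabled"]:
            yield name, spec
```

Each task ("Ensuring config directories exist", "Copying over config.json files", "Check ovn containers") iterates this same mapping, which is why the identical item dicts recur for every node and every task.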
2025-04-10 00:52:58.042332 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-10 00:52:58.042342 | orchestrator | Thursday 10 April 2025 00:52:20 +0000 (0:00:01.884) 0:01:52.618 ******** 2025-04-10 00:52:58.042353 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042363 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042374 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042400 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042438 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042464 | orchestrator | 2025-04-10 00:52:58.042474 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-10 00:52:58.042485 | orchestrator | Thursday 10 April 2025 00:52:24 +0000 (0:00:04.321) 0:01:56.940 ******** 2025-04-10 00:52:58.042495 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042506 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042516 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042527 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042537 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042547 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042562 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042578 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042594 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 00:52:58.042604 | orchestrator | 2025-04-10 00:52:58.042615 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-10 00:52:58.042625 | orchestrator | Thursday 10 April 2025 00:52:28 +0000 (0:00:03.156) 0:02:00.097 ******** 2025-04-10 00:52:58.042636 | orchestrator | 2025-04-10 00:52:58.042646 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-10 00:52:58.042657 | orchestrator | Thursday 10 April 2025 00:52:28 +0000 (0:00:00.237) 0:02:00.334 ******** 2025-04-10 00:52:58.042667 | orchestrator | 2025-04-10 00:52:58.042677 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-10 00:52:58.042688 | orchestrator | Thursday 10 April 2025 00:52:28 +0000 (0:00:00.072) 0:02:00.407 ******** 2025-04-10 00:52:58.042698 | orchestrator | 2025-04-10 00:52:58.042708 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-04-10 00:52:58.042718 | orchestrator | Thursday 10 April 2025 00:52:28 +0000 (0:00:00.067) 0:02:00.475 ******** 2025-04-10 00:52:58.042729 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:52:58.042739 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:52:58.042749 | orchestrator | 2025-04-10 00:52:58.042760 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 
2025-04-10 00:52:58.042770 | orchestrator | Thursday 10 April 2025 00:52:35 +0000 (0:00:06.818) 0:02:07.293 ******** 2025-04-10 00:52:58.042780 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:52:58.042790 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:52:58.042801 | orchestrator | 2025-04-10 00:52:58.042811 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-10 00:52:58.042821 | orchestrator | Thursday 10 April 2025 00:52:41 +0000 (0:00:06.489) 0:02:13.782 ******** 2025-04-10 00:52:58.042831 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:52:58.042842 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:52:58.042852 | orchestrator | 2025-04-10 00:52:58.042862 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-10 00:52:58.042873 | orchestrator | Thursday 10 April 2025 00:52:48 +0000 (0:00:06.617) 0:02:20.400 ******** 2025-04-10 00:52:58.042897 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:52:58.042908 | orchestrator | 2025-04-10 00:52:58.042918 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-04-10 00:52:58.042929 | orchestrator | Thursday 10 April 2025 00:52:48 +0000 (0:00:00.447) 0:02:20.848 ******** 2025-04-10 00:52:58.042939 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.042949 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.042960 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.042970 | orchestrator | 2025-04-10 00:52:58.042980 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-10 00:52:58.042990 | orchestrator | Thursday 10 April 2025 00:52:50 +0000 (0:00:01.258) 0:02:22.106 ******** 2025-04-10 00:52:58.043000 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:52:58.043011 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.043021 | orchestrator | 
skipping: [testbed-node-2] 2025-04-10 00:52:58.043031 | orchestrator | 2025-04-10 00:52:58.043042 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-10 00:52:58.043052 | orchestrator | Thursday 10 April 2025 00:52:51 +0000 (0:00:00.935) 0:02:23.041 ******** 2025-04-10 00:52:58.043062 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.043085 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.043096 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.043107 | orchestrator | 2025-04-10 00:52:58.043117 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-10 00:52:58.043127 | orchestrator | Thursday 10 April 2025 00:52:52 +0000 (0:00:00.990) 0:02:24.032 ******** 2025-04-10 00:52:58.043138 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:52:58.043148 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:52:58.043158 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:52:58.043168 | orchestrator | 2025-04-10 00:52:58.043178 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-04-10 00:52:58.043189 | orchestrator | Thursday 10 April 2025 00:52:52 +0000 (0:00:00.894) 0:02:24.926 ******** 2025-04-10 00:52:58.043199 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.043209 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.043219 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.043229 | orchestrator | 2025-04-10 00:52:58.043240 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-04-10 00:52:58.043250 | orchestrator | Thursday 10 April 2025 00:52:53 +0000 (0:00:00.908) 0:02:25.835 ******** 2025-04-10 00:52:58.043260 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:52:58.043270 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:52:58.043280 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:52:58.043290 | 
orchestrator | 2025-04-10 00:52:58.043300 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:52:58.043311 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-04-10 00:52:58.043327 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-04-10 00:53:01.078534 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-04-10 00:53:01.078642 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:53:01.078673 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:53:01.078682 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 00:53:01.078691 | orchestrator | 2025-04-10 00:53:01.078701 | orchestrator | 2025-04-10 00:53:01.078711 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 00:53:01.078722 | orchestrator | Thursday 10 April 2025 00:52:55 +0000 (0:00:01.348) 0:02:27.183 ******** 2025-04-10 00:53:01.078731 | orchestrator | =============================================================================== 2025-04-10 00:53:01.078740 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.80s 2025-04-10 00:53:01.078748 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 18.45s 2025-04-10 00:53:01.078757 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.33s 2025-04-10 00:53:01.078766 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.56s 2025-04-10 00:53:01.078774 | orchestrator | ovn-db : Restart ovn-nb-db container 
------------------------------------ 9.32s 2025-04-10 00:53:01.078783 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.48s 2025-04-10 00:53:01.078798 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.32s 2025-04-10 00:53:01.078807 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.43s 2025-04-10 00:53:01.078816 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.16s 2025-04-10 00:53:01.078825 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.41s 2025-04-10 00:53:01.078851 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.35s 2025-04-10 00:53:01.078860 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.19s 2025-04-10 00:53:01.078869 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.16s 2025-04-10 00:53:01.078877 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.12s 2025-04-10 00:53:01.078932 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.11s 2025-04-10 00:53:01.078941 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.88s 2025-04-10 00:53:01.078950 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.71s 2025-04-10 00:53:01.078959 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.35s 2025-04-10 00:53:01.078968 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.33s 2025-04-10 00:53:01.078976 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.26s 2025-04-10 00:53:01.078986 | orchestrator | 2025-04-10 00:52:58 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:01.078995 | orchestrator | 2025-04-10 00:52:58 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:01.079004 | orchestrator | 2025-04-10 00:52:58 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:01.079027 | orchestrator | 2025-04-10 00:53:01 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:01.080071 | orchestrator | 2025-04-10 00:53:01 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:01.082565 | orchestrator | 2025-04-10 00:53:01 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:04.132793 | orchestrator | 2025-04-10 00:53:01 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:04.132966 | orchestrator | 2025-04-10 00:53:04 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:04.135701 | orchestrator | 2025-04-10 00:53:04 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:04.138609 | orchestrator | 2025-04-10 00:53:04 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:04.138658 | orchestrator | 2025-04-10 00:53:04 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:07.189341 | orchestrator | 2025-04-10 00:53:07 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:07.193956 | orchestrator | 2025-04-10 00:53:07 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:07.197161 | orchestrator | 2025-04-10 00:53:07 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:10.247254 | orchestrator | 2025-04-10 00:53:07 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:10.247390 | orchestrator | 2025-04-10 00:53:10 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state 
STARTED 2025-04-10 00:53:10.248796 | orchestrator | 2025-04-10 00:53:10 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:10.250148 | orchestrator | 2025-04-10 00:53:10 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:10.250476 | orchestrator | 2025-04-10 00:53:10 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:13.306462 | orchestrator | 2025-04-10 00:53:13 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:13.308208 | orchestrator | 2025-04-10 00:53:13 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:13.310342 | orchestrator | 2025-04-10 00:53:13 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:16.373243 | orchestrator | 2025-04-10 00:53:13 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:16.373377 | orchestrator | 2025-04-10 00:53:16 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:16.374552 | orchestrator | 2025-04-10 00:53:16 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:16.374978 | orchestrator | 2025-04-10 00:53:16 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:19.422093 | orchestrator | 2025-04-10 00:53:16 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:19.422240 | orchestrator | 2025-04-10 00:53:19 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:19.424814 | orchestrator | 2025-04-10 00:53:19 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:19.428198 | orchestrator | 2025-04-10 00:53:19 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:22.464818 | orchestrator | 2025-04-10 00:53:19 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:22.465011 | orchestrator | 
2025-04-10 00:53:22 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:22.467672 | orchestrator | 2025-04-10 00:53:22 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:22.469707 | orchestrator | 2025-04-10 00:53:22 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:22.470130 | orchestrator | 2025-04-10 00:53:22 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:25.514729 | orchestrator | 2025-04-10 00:53:25 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:25.516075 | orchestrator | 2025-04-10 00:53:25 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:25.516127 | orchestrator | 2025-04-10 00:53:25 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:28.566344 | orchestrator | 2025-04-10 00:53:25 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:28.566487 | orchestrator | 2025-04-10 00:53:28 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:28.567107 | orchestrator | 2025-04-10 00:53:28 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:28.571272 | orchestrator | 2025-04-10 00:53:28 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:31.620267 | orchestrator | 2025-04-10 00:53:28 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:31.620418 | orchestrator | 2025-04-10 00:53:31 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:31.621475 | orchestrator | 2025-04-10 00:53:31 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:31.623233 | orchestrator | 2025-04-10 00:53:31 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:34.678844 | orchestrator | 2025-04-10 00:53:31 | INFO  | 
Wait 1 second(s) until the next check 2025-04-10 00:53:34.679041 | orchestrator | 2025-04-10 00:53:34 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:34.679945 | orchestrator | 2025-04-10 00:53:34 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:34.682312 | orchestrator | 2025-04-10 00:53:34 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:37.735282 | orchestrator | 2025-04-10 00:53:34 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:37.735436 | orchestrator | 2025-04-10 00:53:37 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:37.738856 | orchestrator | 2025-04-10 00:53:37 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:37.741287 | orchestrator | 2025-04-10 00:53:37 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:40.791872 | orchestrator | 2025-04-10 00:53:37 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:40.792081 | orchestrator | 2025-04-10 00:53:40 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:43.833183 | orchestrator | 2025-04-10 00:53:40 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:43.833310 | orchestrator | 2025-04-10 00:53:40 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:43.833330 | orchestrator | 2025-04-10 00:53:40 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:43.833363 | orchestrator | 2025-04-10 00:53:43 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:43.834094 | orchestrator | 2025-04-10 00:53:43 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:43.835330 | orchestrator | 2025-04-10 00:53:43 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state 
STARTED 2025-04-10 00:53:46.884356 | orchestrator | 2025-04-10 00:53:43 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:46.884517 | orchestrator | 2025-04-10 00:53:46 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:46.886767 | orchestrator | 2025-04-10 00:53:46 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:46.887596 | orchestrator | 2025-04-10 00:53:46 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:49.940401 | orchestrator | 2025-04-10 00:53:46 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:49.940542 | orchestrator | 2025-04-10 00:53:49 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:49.941498 | orchestrator | 2025-04-10 00:53:49 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:49.944052 | orchestrator | 2025-04-10 00:53:49 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:49.944225 | orchestrator | 2025-04-10 00:53:49 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:52.996501 | orchestrator | 2025-04-10 00:53:52 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:52.997658 | orchestrator | 2025-04-10 00:53:52 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:53.000421 | orchestrator | 2025-04-10 00:53:53 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:53.000725 | orchestrator | 2025-04-10 00:53:53 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:56.053591 | orchestrator | 2025-04-10 00:53:56 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:53:56.057307 | orchestrator | 2025-04-10 00:53:56 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:53:56.058149 | orchestrator | 
2025-04-10 00:53:56 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:53:59.105206 | orchestrator | 2025-04-10 00:53:56 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:53:59.105347 | orchestrator | 2025-04-10 00:53:59 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:02.152445 | orchestrator | 2025-04-10 00:53:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:02.152573 | orchestrator | 2025-04-10 00:53:59 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:02.152594 | orchestrator | 2025-04-10 00:53:59 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:02.152627 | orchestrator | 2025-04-10 00:54:02 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:02.153280 | orchestrator | 2025-04-10 00:54:02 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:02.155390 | orchestrator | 2025-04-10 00:54:02 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:02.155781 | orchestrator | 2025-04-10 00:54:02 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:05.206304 | orchestrator | 2025-04-10 00:54:05 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:05.208165 | orchestrator | 2025-04-10 00:54:05 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:05.209862 | orchestrator | 2025-04-10 00:54:05 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:08.265291 | orchestrator | 2025-04-10 00:54:05 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:08.265437 | orchestrator | 2025-04-10 00:54:08 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:08.269559 | orchestrator | 2025-04-10 00:54:08 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:08.270664 | orchestrator | 2025-04-10 00:54:08 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:11.312117 | orchestrator | 2025-04-10 00:54:08 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:11.312255 | orchestrator | 2025-04-10 00:54:11 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:11.313021 | orchestrator | 2025-04-10 00:54:11 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:11.316386 | orchestrator | 2025-04-10 00:54:11 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:11.316494 | orchestrator | 2025-04-10 00:54:11 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:14.356825 | orchestrator | 2025-04-10 00:54:14 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:14.360190 | orchestrator | 2025-04-10 00:54:14 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:14.362273 | orchestrator | 2025-04-10 00:54:14 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:14.362623 | orchestrator | 2025-04-10 00:54:14 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:17.415336 | orchestrator | 2025-04-10 00:54:17 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:17.419711 | orchestrator | 2025-04-10 00:54:17 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:17.421535 | orchestrator | 2025-04-10 00:54:17 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:17.422097 | orchestrator | 2025-04-10 00:54:17 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:20.471499 | orchestrator | 2025-04-10 00:54:20 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state 
STARTED 2025-04-10 00:54:20.473177 | orchestrator | 2025-04-10 00:54:20 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:20.478170 | orchestrator | 2025-04-10 00:54:20 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:23.524115 | orchestrator | 2025-04-10 00:54:20 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:23.524314 | orchestrator | 2025-04-10 00:54:23 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:23.524404 | orchestrator | 2025-04-10 00:54:23 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:23.525129 | orchestrator | 2025-04-10 00:54:23 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:26.574350 | orchestrator | 2025-04-10 00:54:23 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:26.574501 | orchestrator | 2025-04-10 00:54:26 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:26.575163 | orchestrator | 2025-04-10 00:54:26 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:26.576390 | orchestrator | 2025-04-10 00:54:26 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:29.615291 | orchestrator | 2025-04-10 00:54:26 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:29.615422 | orchestrator | 2025-04-10 00:54:29 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:29.616798 | orchestrator | 2025-04-10 00:54:29 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:29.617680 | orchestrator | 2025-04-10 00:54:29 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:29.617784 | orchestrator | 2025-04-10 00:54:29 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:32.669335 | orchestrator | 
2025-04-10 00:54:32 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:32.673521 | orchestrator | 2025-04-10 00:54:32 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:32.675623 | orchestrator | 2025-04-10 00:54:32 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:35.735793 | orchestrator | 2025-04-10 00:54:32 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:35.735994 | orchestrator | 2025-04-10 00:54:35 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:35.736637 | orchestrator | 2025-04-10 00:54:35 | INFO  | Task 9759e2af-cd83-4f6a-8ea3-150ffe15ceb9 is in state STARTED 2025-04-10 00:54:35.736675 | orchestrator | 2025-04-10 00:54:35 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:35.739845 | orchestrator | 2025-04-10 00:54:35 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:38.788937 | orchestrator | 2025-04-10 00:54:35 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:38.789087 | orchestrator | 2025-04-10 00:54:38 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:38.794176 | orchestrator | 2025-04-10 00:54:38 | INFO  | Task 9759e2af-cd83-4f6a-8ea3-150ffe15ceb9 is in state STARTED 2025-04-10 00:54:38.797205 | orchestrator | 2025-04-10 00:54:38 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:38.804476 | orchestrator | 2025-04-10 00:54:38 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:41.847288 | orchestrator | 2025-04-10 00:54:38 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:41.847393 | orchestrator | 2025-04-10 00:54:41 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:41.849030 | orchestrator | 2025-04-10 00:54:41 | INFO  | 
Task 9759e2af-cd83-4f6a-8ea3-150ffe15ceb9 is in state STARTED 2025-04-10 00:54:41.850009 | orchestrator | 2025-04-10 00:54:41 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:41.851584 | orchestrator | 2025-04-10 00:54:41 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:44.910278 | orchestrator | 2025-04-10 00:54:41 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:44.910469 | orchestrator | 2025-04-10 00:54:44 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:44.910613 | orchestrator | 2025-04-10 00:54:44 | INFO  | Task 9759e2af-cd83-4f6a-8ea3-150ffe15ceb9 is in state STARTED 2025-04-10 00:54:44.910636 | orchestrator | 2025-04-10 00:54:44 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:44.910657 | orchestrator | 2025-04-10 00:54:44 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:47.959660 | orchestrator | 2025-04-10 00:54:44 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:47.959833 | orchestrator | 2025-04-10 00:54:47 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:47.960402 | orchestrator | 2025-04-10 00:54:47 | INFO  | Task 9759e2af-cd83-4f6a-8ea3-150ffe15ceb9 is in state STARTED 2025-04-10 00:54:47.964097 | orchestrator | 2025-04-10 00:54:47 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:47.968171 | orchestrator | 2025-04-10 00:54:47 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:51.043295 | orchestrator | 2025-04-10 00:54:47 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:51.043469 | orchestrator | 2025-04-10 00:54:51 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:51.043565 | orchestrator | 2025-04-10 00:54:51 | INFO  | Task 
9759e2af-cd83-4f6a-8ea3-150ffe15ceb9 is in state SUCCESS 2025-04-10 00:54:51.047697 | orchestrator | 2025-04-10 00:54:51 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:51.048450 | orchestrator | 2025-04-10 00:54:51 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:54.102752 | orchestrator | 2025-04-10 00:54:51 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:54.102991 | orchestrator | 2025-04-10 00:54:54 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:54.105602 | orchestrator | 2025-04-10 00:54:54 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:54.107635 | orchestrator | 2025-04-10 00:54:54 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:54:57.149423 | orchestrator | 2025-04-10 00:54:54 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:54:57.149594 | orchestrator | 2025-04-10 00:54:57 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:54:57.150518 | orchestrator | 2025-04-10 00:54:57 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:54:57.152047 | orchestrator | 2025-04-10 00:54:57 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:55:00.204905 | orchestrator | 2025-04-10 00:54:57 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:55:00.205007 | orchestrator | 2025-04-10 00:55:00 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:55:00.206864 | orchestrator | 2025-04-10 00:55:00 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:55:00.209369 | orchestrator | 2025-04-10 00:55:00 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:55:00.209463 | orchestrator | 2025-04-10 00:55:00 | INFO  | Wait 1 second(s) until the next 
check 2025-04-10 00:55:03.270784 | orchestrator | 2025-04-10 00:55:03 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:55:03.271229 | orchestrator | 2025-04-10 00:55:03 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:55:03.273329 | orchestrator | 2025-04-10 00:55:03 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:55:06.346360 | orchestrator | 2025-04-10 00:55:03 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:55:06.346501 | orchestrator | 2025-04-10 00:55:06 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:55:06.347357 | orchestrator | 2025-04-10 00:55:06 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:55:06.348399 | orchestrator | 2025-04-10 00:55:06 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:55:06.348546 | orchestrator | 2025-04-10 00:55:06 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:55:09.399511 | orchestrator | 2025-04-10 00:55:09 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:55:09.401086 | orchestrator | 2025-04-10 00:55:09 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:55:09.402238 | orchestrator | 2025-04-10 00:55:09 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:55:12.462231 | orchestrator | 2025-04-10 00:55:09 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:55:12.462358 | orchestrator | 2025-04-10 00:55:12 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:55:12.465262 | orchestrator | 2025-04-10 00:55:12 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:55:15.501841 | orchestrator | 2025-04-10 00:55:12 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 
00:55:15.502006 | orchestrator | 2025-04-10 00:55:12 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:55:15.502101 | orchestrator | 2025-04-10 00:55:15 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:55:15.502458 | orchestrator | 2025-04-10 00:55:15 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:55:15.503178 | orchestrator | 2025-04-10 00:55:15 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:55:15.503253 | orchestrator | 2025-04-10 00:55:15 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:55:18.555004 | orchestrator | 2025-04-10 00:55:18 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:55:18.556826 | orchestrator | 2025-04-10 00:55:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:55:18.561175 | orchestrator | 2025-04-10 00:55:18 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:55:21.617657 | orchestrator | 2025-04-10 00:55:18 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:55:21.617830 | orchestrator | 2025-04-10 00:55:21 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:55:21.618355 | orchestrator | 2025-04-10 00:55:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:55:21.620052 | orchestrator | 2025-04-10 00:55:21 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:55:24.687456 | orchestrator | 2025-04-10 00:55:21 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:55:24.687636 | orchestrator | 2025-04-10 00:55:24 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED 2025-04-10 00:55:24.690105 | orchestrator | 2025-04-10 00:55:24 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:55:24.690218 | orchestrator | 2025-04-10 00:55:24 | 
INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:56:56.299049 | orchestrator | 2025-04-10 00:56:56 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state STARTED
2025-04-10 00:56:59.335293 | orchestrator | 2025-04-10 00:56:56 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 00:56:59.335390 | orchestrator | 2025-04-10 00:56:56 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:56:59.335404 | orchestrator | 2025-04-10 00:56:56 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:56:59.335428 | orchestrator | 2025-04-10 00:56:59 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED
2025-04-10 00:56:59.342439 | orchestrator
| 2025-04-10 00:56:59 | INFO  | Task c1c4b355-6b76-4ec6-b936-f82c1acf37f7 is in state SUCCESS
2025-04-10 00:56:59.345106 | orchestrator |
2025-04-10 00:56:59.345145 | orchestrator | None
2025-04-10 00:56:59.345155 | orchestrator |
2025-04-10 00:56:59.345165 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-10 00:56:59.345175 | orchestrator |
2025-04-10 00:56:59.345184 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-10 00:56:59.345194 | orchestrator | Thursday 10 April 2025 00:49:02 +0000 (0:00:00.521) 0:00:00.521 ********
2025-04-10 00:56:59.345204 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:56:59.345215 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:56:59.345224 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:56:59.345234 | orchestrator |
2025-04-10 00:56:59.345244 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-10 00:56:59.345253 | orchestrator | Thursday 10 April 2025 00:49:02 +0000 (0:00:00.669) 0:00:01.190 ********
2025-04-10 00:56:59.345263 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-04-10 00:56:59.345273 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-04-10 00:56:59.345283 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-04-10 00:56:59.345292 | orchestrator |
2025-04-10 00:56:59.345302 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-04-10 00:56:59.345311 | orchestrator |
2025-04-10 00:56:59.345320 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-04-10 00:56:59.345330 | orchestrator | Thursday 10 April 2025 00:49:03 +0000 (0:00:00.542) 0:00:01.733 ********
2025-04-10 00:56:59.345340 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:56:59.345350 | orchestrator |
2025-04-10 00:56:59.345385 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-04-10 00:56:59.345395 | orchestrator | Thursday 10 April 2025 00:49:04 +0000 (0:00:01.030) 0:00:02.763 ********
2025-04-10 00:56:59.345404 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:56:59.345414 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:56:59.345423 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:56:59.345432 | orchestrator |
2025-04-10 00:56:59.345442 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-04-10 00:56:59.345451 | orchestrator | Thursday 10 April 2025 00:49:05 +0000 (0:00:01.481) 0:00:04.245 ********
2025-04-10 00:56:59.345460 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:56:59.345470 | orchestrator |
2025-04-10 00:56:59.345479 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-04-10 00:56:59.345489 | orchestrator | Thursday 10 April 2025 00:49:07 +0000 (0:00:02.654) 0:00:05.918 ********
2025-04-10 00:56:59.345498 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:56:59.345507 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:56:59.345517 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:56:59.345526 | orchestrator |
2025-04-10 00:56:59.345536 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-04-10 00:56:59.345545 | orchestrator | Thursday 10 April 2025 00:49:10 +0000 (0:00:02.654) 0:00:08.573 ********
2025-04-10 00:56:59.345554 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-04-10 00:56:59.345564 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-04-10 00:56:59.345574 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-04-10 00:56:59.345584 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-04-10 00:56:59.345593 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-04-10 00:56:59.345603 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-04-10 00:56:59.345616 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-04-10 00:56:59.345626 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-04-10 00:56:59.345636 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-04-10 00:56:59.345646 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-04-10 00:56:59.345655 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-04-10 00:56:59.345664 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-04-10 00:56:59.345674 | orchestrator |
2025-04-10 00:56:59.345683 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-04-10 00:56:59.345693 | orchestrator | Thursday 10 April 2025 00:49:13 +0000 (0:00:03.555) 0:00:12.129 ********
2025-04-10 00:56:59.345702 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-04-10 00:56:59.345717 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-04-10 00:56:59.345728 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-04-10 00:56:59.345739 | orchestrator |
2025-04-10 00:56:59.345750 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-04-10 00:56:59.345760 |
orchestrator | Thursday 10 April 2025 00:49:14 +0000 (0:00:01.220) 0:00:13.349 ********
2025-04-10 00:56:59.345771 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-04-10 00:56:59.345782 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-04-10 00:56:59.345792 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-04-10 00:56:59.345802 | orchestrator |
2025-04-10 00:56:59.345813 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-04-10 00:56:59.345828 | orchestrator | Thursday 10 April 2025 00:49:16 +0000 (0:00:01.713) 0:00:15.063 ********
2025-04-10 00:56:59.345838 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-04-10 00:56:59.345849 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.345886 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-04-10 00:56:59.345898 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.345909 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-04-10 00:56:59.345919 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.345929 | orchestrator |
2025-04-10 00:56:59.345938 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-04-10 00:56:59.345948 | orchestrator | Thursday 10 April 2025 00:49:17 +0000 (0:00:00.555) 0:00:15.618 ********
2025-04-10 00:56:59.345958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.345972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.345982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.345992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.346002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.346075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.346089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.346100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.346110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.346120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-04-10 00:56:59.346131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-04-10 00:56:59.346141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-04-10 00:56:59.346155 | orchestrator |
2025-04-10 00:56:59.346165 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-04-10 00:56:59.346174 | orchestrator | Thursday 10 April 2025 00:49:20 +0000 (0:00:02.942) 0:00:18.560 ********
2025-04-10 00:56:59.346184 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.346194 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.346203 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.346213 | orchestrator |
2025-04-10 00:56:59.346226 | orchestrator | TASK
[loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-04-10 00:56:59.346236 | orchestrator | Thursday 10 April 2025 00:49:22 +0000 (0:00:02.099) 0:00:20.659 ********
2025-04-10 00:56:59.346246 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-04-10 00:56:59.346255 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-04-10 00:56:59.346265 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-04-10 00:56:59.346274 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-04-10 00:56:59.346283 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-04-10 00:56:59.346293 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-04-10 00:56:59.346302 | orchestrator |
2025-04-10 00:56:59.346312 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-04-10 00:56:59.346321 | orchestrator | Thursday 10 April 2025 00:49:25 +0000 (0:00:03.485) 0:00:24.145 ********
2025-04-10 00:56:59.346331 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.346340 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.346350 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.346359 | orchestrator |
2025-04-10 00:56:59.346368 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-04-10 00:56:59.346378 | orchestrator | Thursday 10 April 2025 00:49:27 +0000 (0:00:02.065) 0:00:26.213 ********
2025-04-10 00:56:59.346387 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:56:59.346397 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:56:59.346407 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:56:59.346416 | orchestrator |
2025-04-10 00:56:59.346426 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-04-10 00:56:59.346435 | orchestrator | Thursday 10 April 2025 00:49:31 +0000 (0:00:03.815) 0:00:30.028 ********
2025-04-10 00:56:59.346445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.346456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.346470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.346480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.346495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.346505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10
00:56:59.346515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-10 00:56:59.346525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-10 00:56:59.346535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-10 00:56:59.346549 
| orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.346558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-10 00:56:59.346568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-10 00:56:59.346578 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.346592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-10 00:56:59.346602 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.346612 | orchestrator | 2025-04-10 00:56:59.346621 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-04-10 00:56:59.346631 | orchestrator | Thursday 10 April 2025 00:49:35 +0000 (0:00:03.725) 0:00:33.754 ******** 2025-04-10 00:56:59.346640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-10 00:56:59.346650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-10 00:56:59.346664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-10 00:56:59.346674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-10 00:56:59.346683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-10 00:56:59.346697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-10 00:56:59.346707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-10 00:56:59.346717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-10 00:56:59.346730 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-10 00:56:59.346740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-10 00:56:59.346750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-10 00:56:59.346765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-10 00:56:59.346776 | orchestrator | 2025-04-10 00:56:59.346785 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-04-10 00:56:59.346795 | orchestrator | Thursday 10 April 2025 00:49:41 +0000 (0:00:06.059) 0:00:39.814 ******** 2025-04-10 00:56:59.346808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-10 00:56:59.346818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-10 00:56:59.346834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-10 00:56:59.346844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-10 00:56:59.346854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-10 00:56:59.346905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-10 00:56:59.346925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-10 00:56:59.346936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-10 00:56:59.346954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-10 00:56:59.346964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-10 00:56:59.346974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-10 00:56:59.346984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-10 00:56:59.346994 | orchestrator | 2025-04-10 00:56:59.347003 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-04-10 00:56:59.347013 | orchestrator | Thursday 10 April 2025 00:49:45 +0000 (0:00:03.612) 0:00:43.426 ******** 2025-04-10 00:56:59.347027 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-10 00:56:59.347040 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-10 00:56:59.347050 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-10 00:56:59.347060 | orchestrator | 2025-04-10 00:56:59.347069 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-04-10 00:56:59.347079 | orchestrator | Thursday 10 April 2025 00:49:48 +0000 (0:00:03.499) 0:00:46.926 ******** 2025-04-10 00:56:59.347088 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-10 00:56:59.347098 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-10 00:56:59.347107 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-10 00:56:59.347121 | orchestrator | 2025-04-10 00:56:59.347131 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-04-10 00:56:59.347140 | orchestrator | Thursday 10 April 2025 00:49:54 +0000 (0:00:06.085) 0:00:53.012 ******** 2025-04-10 00:56:59.347149 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.347159 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.347168 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.347178 | orchestrator | 2025-04-10 00:56:59.347187 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-04-10 00:56:59.347196 | orchestrator | Thursday 10 April 2025 00:49:55 +0000 (0:00:01.310) 0:00:54.322 ******** 2025-04-10 00:56:59.347205 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-10 00:56:59.347216 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-10 00:56:59.347225 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-10 00:56:59.347235 | orchestrator | 2025-04-10 00:56:59.347244 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-04-10 00:56:59.347254 | orchestrator | Thursday 10 April 2025 00:49:59 +0000 (0:00:03.247) 0:00:57.570 ******** 2025-04-10 00:56:59.347263 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-10 00:56:59.347273 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-10 00:56:59.347282 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-10 00:56:59.347292 | orchestrator | 2025-04-10 00:56:59.347301 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-04-10 00:56:59.347310 | orchestrator | Thursday 10 April 2025 00:50:02 +0000 (0:00:03.391) 0:01:00.962 ******** 2025-04-10 00:56:59.347320 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-04-10 00:56:59.347335 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-04-10 00:56:59.347344 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-04-10 00:56:59.347354 | orchestrator | 2025-04-10 00:56:59.347363 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-04-10 00:56:59.347372 | orchestrator | Thursday 10 April 2025 00:50:05 +0000 (0:00:02.499) 0:01:03.461 ******** 2025-04-10 00:56:59.347382 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-04-10 00:56:59.347392 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-04-10 00:56:59.347402 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-04-10 00:56:59.347412 | orchestrator | 2025-04-10 00:56:59.347421 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-04-10 00:56:59.347430 | orchestrator | Thursday 10 April 2025 00:50:07 +0000 (0:00:02.618) 0:01:06.079 ******** 2025-04-10 00:56:59.347440 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.347449 | 
orchestrator |
2025-04-10 00:56:59.347459 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-04-10 00:56:59.347468 | orchestrator | Thursday 10 April 2025 00:50:08 +0000 (0:00:00.986) 0:01:07.066 ********
2025-04-10 00:56:59.347481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.347508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.347519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.347529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.347539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.347549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.347561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.347583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.347594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.347603 | orchestrator |
2025-04-10 00:56:59.347613 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-04-10 00:56:59.347622 | orchestrator | Thursday 10 April 2025 00:50:12 +0000 (0:00:03.941) 0:01:11.007 ********
2025-04-10 00:56:59.347632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.347642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.347651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.347661 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.347670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.347687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.347703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.347713 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.347722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.347732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.347742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.347751 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.347761 | orchestrator |
2025-04-10 00:56:59.347770 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-04-10 00:56:59.347780 | orchestrator | Thursday 10 April 2025 00:50:13 +0000 (0:00:00.665) 0:01:11.672 ********
2025-04-10 00:56:59.347789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.347809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.347823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.347833 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.347843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.347853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.347907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.347917 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.347927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.347947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.347957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.347967 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.347977 | orchestrator |
2025-04-10 00:56:59.347986 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-04-10 00:56:59.348000 | orchestrator | Thursday 10 April 2025 00:50:14 +0000 (0:00:01.275) 0:01:12.948 ********
2025-04-10 00:56:59.348010 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-04-10 00:56:59.348020 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-04-10 00:56:59.348029 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-04-10 00:56:59.348039 | orchestrator |
2025-04-10 00:56:59.348048 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-04-10 00:56:59.348057 | orchestrator | Thursday 10 April 2025 00:50:17 +0000 (0:00:03.072) 0:01:16.020 ********
2025-04-10 00:56:59.348067 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-04-10 00:56:59.348076 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-04-10 00:56:59.348085 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-04-10 00:56:59.348094 | orchestrator |
2025-04-10 00:56:59.348103 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-04-10 00:56:59.348113 | orchestrator | Thursday 10 April 2025 00:50:20 +0000 (0:00:02.461) 0:01:18.481 ********
2025-04-10 00:56:59.348122 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-04-10 00:56:59.348131 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-04-10 00:56:59.348141 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-04-10 00:56:59.348150 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-10 00:56:59.348159 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.348168 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-10 00:56:59.348178 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.348187 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-10 00:56:59.348196 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.348206 | orchestrator |
2025-04-10 00:56:59.348215 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-04-10 00:56:59.348224 | orchestrator | Thursday 10 April 2025 00:50:21 +0000 (0:00:01.819) 0:01:20.301 ********
2025-04-10 00:56:59.348238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.348248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.348258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-04-10 00:56:59.348272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.348285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.348295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-10 00:56:59.348310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.348319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-04-10 00:56:59.348329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.348344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-04-10 00:56:59.348354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-10 00:56:59.348364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275', '__omit_place_holder__9e7203cbee84e7c02bd930ac4e912ebf4158b275'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-04-10 00:56:59.348373 | orchestrator |
2025-04-10 00:56:59.348383 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-04-10 00:56:59.348392 | orchestrator | Thursday 10 April 2025 00:50:25 +0000 (0:00:03.246) 0:01:23.547 ********
2025-04-10 00:56:59.348406 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:56:59.348415 | orchestrator |
2025-04-10 00:56:59.348425 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-04-10 00:56:59.348433 | orchestrator | Thursday 10 April 2025 00:50:26 +0000 (0:00:01.007) 0:01:24.555 ********
2025-04-10 00:56:59.348442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-10 00:56:59.348453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-10 00:56:59.348462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.348490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.348500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-10 00:56:59.348509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-10 00:56:59.348522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.348532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.348546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-10 00:56:59.348561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-10 00:56:59.348570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.348579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.348591 | orchestrator |
2025-04-10 00:56:59.348600 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-04-10 00:56:59.348609 | orchestrator | Thursday 10 April 2025 00:50:32 +0000 (0:00:06.139) 0:01:30.694 ********
2025-04-10 00:56:59.348618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external':
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-10 00:56:59.348627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.348641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.348655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.348664 | orchestrator | 
skipping: [testbed-node-0] 2025-04-10 00:56:59.348673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-10 00:56:59.348686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.348695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.348704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.348713 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.348727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-10 00:56:59.348742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.348751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.348764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.348773 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.348785 | orchestrator | 2025-04-10 00:56:59.348794 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-04-10 00:56:59.348803 | orchestrator | Thursday 10 April 2025 00:50:33 +0000 (0:00:00.743) 
0:01:31.438 ******** 2025-04-10 00:56:59.348812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-10 00:56:59.348821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-10 00:56:59.348830 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.348839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-10 00:56:59.348848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-10 00:56:59.348872 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.348882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-10 00:56:59.348891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-10 00:56:59.348900 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.348908 | orchestrator | 2025-04-10 00:56:59.348917 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-04-10 00:56:59.348926 | orchestrator | Thursday 10 April 2025 00:50:34 +0000 (0:00:01.115) 0:01:32.554 ******** 2025-04-10 00:56:59.348934 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.348943 | 
orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.348952 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.348960 | orchestrator | 2025-04-10 00:56:59.348969 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-04-10 00:56:59.348977 | orchestrator | Thursday 10 April 2025 00:50:35 +0000 (0:00:01.424) 0:01:33.978 ******** 2025-04-10 00:56:59.348986 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.348995 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.349003 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.349012 | orchestrator | 2025-04-10 00:56:59.349020 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-04-10 00:56:59.349029 | orchestrator | Thursday 10 April 2025 00:50:38 +0000 (0:00:02.458) 0:01:36.437 ******** 2025-04-10 00:56:59.349038 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.349046 | orchestrator | 2025-04-10 00:56:59.349055 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-04-10 00:56:59.349068 | orchestrator | Thursday 10 April 2025 00:50:39 +0000 (0:00:01.213) 0:01:37.650 ******** 2025-04-10 00:56:59.349091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.349102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.349111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.349121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.349130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.349153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.349163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.349172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.349181 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.349190 | orchestrator | 2025-04-10 00:56:59.349199 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-04-10 00:56:59.349208 | orchestrator | Thursday 10 April 2025 00:50:44 +0000 (0:00:05.267) 0:01:42.918 ******** 2025-04-10 00:56:59.349216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.349242 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.349252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.349261 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.349270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.349279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.349288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.349301 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.349314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.349329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.349339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.349347 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.349356 | orchestrator | 2025-04-10 00:56:59.349365 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-04-10 00:56:59.349374 | orchestrator | Thursday 10 April 2025 00:50:45 +0000 (0:00:01.095) 0:01:44.014 ******** 2025-04-10 00:56:59.349382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-10 00:56:59.349391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-10 00:56:59.349400 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.349409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-10 00:56:59.349421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-10 00:56:59.349433 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.349442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-10 00:56:59.349451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-10 00:56:59.349460 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.349469 | orchestrator | 2025-04-10 00:56:59.349477 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-04-10 00:56:59.349486 | orchestrator | Thursday 10 April 2025 00:50:47 +0000 (0:00:02.172) 0:01:46.186 ******** 2025-04-10 00:56:59.349494 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.349503 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.349512 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.349520 | orchestrator | 2025-04-10 00:56:59.349529 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-04-10 00:56:59.349537 | orchestrator | Thursday 10 April 2025 00:50:49 +0000 (0:00:01.667) 0:01:47.854 ******** 2025-04-10 00:56:59.349546 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.349554 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.349563 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.349572 | orchestrator | 2025-04-10 00:56:59.349580 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-04-10 00:56:59.349589 | orchestrator | Thursday 10 April 2025 00:50:51 +0000 (0:00:02.273) 0:01:50.127 ******** 2025-04-10 00:56:59.349597 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.349606 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.349615 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.349623 | orchestrator | 2025-04-10 
00:56:59.349636 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-04-10 00:56:59.350463 | orchestrator | Thursday 10 April 2025 00:50:52 +0000 (0:00:00.300) 0:01:50.427 ******** 2025-04-10 00:56:59.350493 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.350501 | orchestrator | 2025-04-10 00:56:59.350510 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-04-10 00:56:59.350518 | orchestrator | Thursday 10 April 2025 00:50:53 +0000 (0:00:00.978) 0:01:51.406 ******** 2025-04-10 00:56:59.350527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-10 00:56:59.350543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-10 00:56:59.350564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-10 00:56:59.350573 | orchestrator | 2025-04-10 00:56:59.350581 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-04-10 00:56:59.350589 | orchestrator | Thursday 10 April 2025 00:50:56 +0000 (0:00:03.274) 0:01:54.681 ******** 2025-04-10 00:56:59.350597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 
check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-10 00:56:59.350606 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.350621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-10 00:56:59.350630 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.350644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-10 00:56:59.350653 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.350665 | orchestrator | 2025-04-10 00:56:59.350673 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-04-10 00:56:59.350681 | orchestrator | Thursday 10 April 2025 00:50:58 +0000 (0:00:01.769) 0:01:56.450 ******** 2025-04-10 00:56:59.350691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-10 00:56:59.350699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-10 00:56:59.350708 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.350716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 
'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-10 00:56:59.350724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-10 00:56:59.350732 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.350740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-10 00:56:59.350754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-10 00:56:59.350763 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.350771 | orchestrator | 2025-04-10 00:56:59.350779 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-04-10 00:56:59.350787 | orchestrator | Thursday 10 April 2025 00:51:00 +0000 (0:00:02.398) 0:01:58.848 ******** 2025-04-10 00:56:59.350795 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.350803 | 
orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.350810 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.350818 | orchestrator | 2025-04-10 00:56:59.350826 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-04-10 00:56:59.350834 | orchestrator | Thursday 10 April 2025 00:51:01 +0000 (0:00:00.826) 0:01:59.675 ******** 2025-04-10 00:56:59.350842 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.350850 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.350876 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.350896 | orchestrator | 2025-04-10 00:56:59.350905 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-04-10 00:56:59.350913 | orchestrator | Thursday 10 April 2025 00:51:02 +0000 (0:00:01.436) 0:02:01.112 ******** 2025-04-10 00:56:59.350921 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.350929 | orchestrator | 2025-04-10 00:56:59.350937 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-04-10 00:56:59.350945 | orchestrator | Thursday 10 April 2025 00:51:03 +0000 (0:00:00.910) 0:02:02.023 ******** 2025-04-10 00:56:59.350953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.350962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.350971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.350984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.350992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.351052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351090 | orchestrator | 2025-04-10 00:56:59.351099 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-04-10 00:56:59.351114 | orchestrator | Thursday 10 April 2025 00:51:09 +0000 (0:00:05.529) 0:02:07.552 ******** 2025-04-10 00:56:59.351124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.351133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351176 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.351185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.351194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351244 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.351254 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.351263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351297 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.351306 | orchestrator | 2025-04-10 00:56:59.351316 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-04-10 00:56:59.351329 | orchestrator | Thursday 10 April 2025 00:51:10 +0000 (0:00:01.422) 0:02:08.975 ******** 2025-04-10 00:56:59.351338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-10 00:56:59.351351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-10 00:56:59.351362 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.351371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-10 00:56:59.351380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-10 00:56:59.351389 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.351398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-10 00:56:59.351407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-10 00:56:59.351416 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.351424 | orchestrator | 2025-04-10 00:56:59.351432 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-04-10 00:56:59.351440 | orchestrator | Thursday 10 April 2025 00:51:12 +0000 (0:00:01.487) 0:02:10.462 ******** 2025-04-10 00:56:59.351447 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.351455 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.351463 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.351471 | orchestrator | 2025-04-10 00:56:59.351479 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-04-10 00:56:59.351487 | orchestrator | Thursday 10 April 
2025 00:51:13 +0000 (0:00:01.522) 0:02:11.985 ******** 2025-04-10 00:56:59.351495 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.351503 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.351511 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.351519 | orchestrator | 2025-04-10 00:56:59.351526 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-04-10 00:56:59.351534 | orchestrator | Thursday 10 April 2025 00:51:15 +0000 (0:00:02.384) 0:02:14.370 ******** 2025-04-10 00:56:59.351542 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.351550 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.351558 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.351569 | orchestrator | 2025-04-10 00:56:59.351577 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-04-10 00:56:59.351585 | orchestrator | Thursday 10 April 2025 00:51:16 +0000 (0:00:00.303) 0:02:14.673 ******** 2025-04-10 00:56:59.351593 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.351601 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.351609 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.351617 | orchestrator | 2025-04-10 00:56:59.351625 | orchestrator | TASK [include_role : designate] ************************************************ 2025-04-10 00:56:59.351633 | orchestrator | Thursday 10 April 2025 00:51:16 +0000 (0:00:00.531) 0:02:15.204 ******** 2025-04-10 00:56:59.351641 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.351649 | orchestrator | 2025-04-10 00:56:59.351657 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-04-10 00:56:59.351669 | orchestrator | Thursday 10 April 2025 00:51:17 +0000 (0:00:01.087) 0:02:16.292 ******** 2025-04-10 00:56:59.351678 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 00:56:59.351689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 00:56:59.351698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 00:56:59.351743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 00:56:59.351772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 00:56:59.351835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 00:56:59.351849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 
'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351915 | orchestrator | 2025-04-10 00:56:59.351926 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-04-10 00:56:59.351935 | orchestrator | Thursday 10 April 2025 00:51:23 +0000 (0:00:05.191) 0:02:21.484 ******** 2025-04-10 00:56:59.351943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 00:56:59.351959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 00:56:59.351968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.351989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.352001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.352010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.352023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 00:56:59.352032 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.352041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 00:56:59.352053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.352061 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.352070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.352081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.352090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.352104 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.352112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 00:56:59.352125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 00:56:59.352134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.352142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.352154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.352163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.352176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.352189 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.352197 | orchestrator | 2025-04-10 00:56:59.352205 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-04-10 00:56:59.352213 | orchestrator | Thursday 10 April 2025 00:51:24 +0000 (0:00:01.117) 0:02:22.601 ******** 2025-04-10 00:56:59.352221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-10 00:56:59.352229 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-10 00:56:59.352238 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.352246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-10 00:56:59.352254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-10 00:56:59.352262 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.352270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-10 00:56:59.352278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-10 00:56:59.352286 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.352294 | orchestrator | 2025-04-10 00:56:59.352302 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-04-10 00:56:59.352310 | orchestrator | Thursday 10 April 2025 00:51:25 +0000 (0:00:01.437) 0:02:24.039 ******** 2025-04-10 00:56:59.352318 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.352326 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.352334 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.352342 | orchestrator | 2025-04-10 00:56:59.352350 | orchestrator | TASK [proxysql-config : Copying over designate 
ProxySQL rules config] ********** 2025-04-10 00:56:59.352357 | orchestrator | Thursday 10 April 2025 00:51:26 +0000 (0:00:01.309) 0:02:25.348 ******** 2025-04-10 00:56:59.352365 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.352373 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.352381 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.352389 | orchestrator | 2025-04-10 00:56:59.352397 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-04-10 00:56:59.352405 | orchestrator | Thursday 10 April 2025 00:51:29 +0000 (0:00:02.236) 0:02:27.585 ******** 2025-04-10 00:56:59.352413 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.352421 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.352429 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.352437 | orchestrator | 2025-04-10 00:56:59.352445 | orchestrator | TASK [include_role : glance] *************************************************** 2025-04-10 00:56:59.352456 | orchestrator | Thursday 10 April 2025 00:51:29 +0000 (0:00:00.758) 0:02:28.343 ******** 2025-04-10 00:56:59.352464 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.352472 | orchestrator | 2025-04-10 00:56:59.352480 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-04-10 00:56:59.352492 | orchestrator | Thursday 10 April 2025 00:51:31 +0000 (0:00:01.417) 0:02:29.760 ******** 2025-04-10 00:56:59.352501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 00:56:59.352515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 
'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 00:56:59.352532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 00:56:59.352548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 00:56:59.352567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 00:56:59.352581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 
'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 00:56:59.352594 | orchestrator | 2025-04-10 00:56:59.352603 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-04-10 00:56:59.352611 | orchestrator | Thursday 10 April 2025 00:51:38 +0000 (0:00:06.992) 0:02:36.753 ******** 2025-04-10 00:56:59.352623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-10 00:56:59.352638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 00:56:59.352652 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.352665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-10 00:56:59.352678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 00:56:59.352692 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.352700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-10 00:56:59.352722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 00:56:59.352732 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.352740 | orchestrator | 2025-04-10 00:56:59.352748 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-04-10 00:56:59.352759 | orchestrator | Thursday 10 April 2025 00:51:44 +0000 (0:00:05.807) 0:02:42.560 ******** 2025-04-10 00:56:59.352768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-10 00:56:59.352777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-10 00:56:59.352785 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.352794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-10 00:56:59.352809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-10 00:56:59.353227 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.353247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-10 00:56:59.353256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}})  2025-04-10 00:56:59.353265 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.353273 | orchestrator | 2025-04-10 00:56:59.353281 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-04-10 00:56:59.353289 | orchestrator | Thursday 10 April 2025 00:51:51 +0000 (0:00:07.242) 0:02:49.802 ******** 2025-04-10 00:56:59.353297 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.353305 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.353313 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.353321 | orchestrator | 2025-04-10 00:56:59.353329 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-04-10 00:56:59.353337 | orchestrator | Thursday 10 April 2025 00:51:52 +0000 (0:00:01.382) 0:02:51.185 ******** 2025-04-10 00:56:59.353346 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.353353 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.353361 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.353370 | orchestrator | 2025-04-10 00:56:59.353378 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-04-10 00:56:59.353386 | orchestrator | Thursday 10 April 2025 00:51:54 +0000 (0:00:02.149) 0:02:53.334 ******** 2025-04-10 00:56:59.353394 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.353403 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.353411 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.353419 | orchestrator | 2025-04-10 00:56:59.353427 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-04-10 00:56:59.353434 | orchestrator | Thursday 10 April 2025 00:51:55 +0000 (0:00:00.545) 0:02:53.880 ******** 2025-04-10 00:56:59.353442 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 
2025-04-10 00:56:59.353450 | orchestrator | 2025-04-10 00:56:59.353458 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-04-10 00:56:59.353466 | orchestrator | Thursday 10 April 2025 00:51:56 +0000 (0:00:01.262) 0:02:55.142 ******** 2025-04-10 00:56:59.353475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 00:56:59.353491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 00:56:59.353510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-10 00:56:59.353519 | orchestrator |
2025-04-10 00:56:59.353527 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2025-04-10 00:56:59.353535 | orchestrator | Thursday 10 April 2025 00:52:00 +0000 (0:00:03.929) 0:02:59.072 ********
2025-04-10 00:56:59.353543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-10 00:56:59.353563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-10 00:56:59.353572 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.353580 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.353588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-10 00:56:59.353601 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.353609 | orchestrator |
2025-04-10 00:56:59.353617 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-04-10 00:56:59.353625 | orchestrator | Thursday 10 April 2025 00:52:01 +0000 (0:00:00.426) 0:02:59.499 ********
2025-04-10 00:56:59.353633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-04-10 00:56:59.353646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-04-10 00:56:59.353654 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.353662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-04-10 00:56:59.353671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-04-10 00:56:59.353679 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.353687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-04-10 00:56:59.353699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-04-10 00:56:59.353707 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.353715 | orchestrator |
2025-04-10 00:56:59.353723 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-04-10 00:56:59.353731 | orchestrator | Thursday 10 April 2025 00:52:02 +0000 (0:00:01.109) 0:03:00.462 ********
2025-04-10 00:56:59.353739 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.353747 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.353755 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.353763 | orchestrator |
2025-04-10 00:56:59.353771 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-04-10 00:56:59.353779 | orchestrator | Thursday 10 April 2025 00:52:03 +0000 (0:00:01.109) 0:03:01.572 ********
2025-04-10 00:56:59.353786 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.353794 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.353802 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.353810 | orchestrator |
2025-04-10 00:56:59.353818 | orchestrator | TASK [include_role : heat] *****************************************************
2025-04-10 00:56:59.353826 | orchestrator | Thursday 10 April 2025 00:52:05 +0000 (0:00:02.206) 0:03:03.778 ********
2025-04-10 00:56:59.353834 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:56:59.353842 | orchestrator |
2025-04-10 00:56:59.353850 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] ***********************
2025-04-10 00:56:59.353874 | orchestrator | Thursday 10 April 2025 00:52:06 +0000 (0:00:01.253) 0:03:05.032 ********
2025-04-10 00:56:59.353885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.353899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.353909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.353923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.353933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.353949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.353963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.353973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.353982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.353991 | orchestrator |
2025-04-10 00:56:59.354003 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] ***
2025-04-10 00:56:59.354041 | orchestrator | Thursday 10 April 2025 00:52:13 +0000 (0:00:06.947) 0:03:11.980 ********
2025-04-10 00:56:59.354052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.354070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.354085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.354094 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.354104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.354117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.354127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.354136 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.354155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.354165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.354174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.354184 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.354193 | orchestrator |
2025-04-10 00:56:59.354203 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] **************************
2025-04-10 00:56:59.354212 | orchestrator | Thursday 10 April 2025 00:52:14 +0000 (0:00:01.120) 0:03:13.100 ********
2025-04-10 00:56:59.354221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-04-10 00:56:59.354231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-04-10 00:56:59.354240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-04-10 00:56:59.354253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-04-10 00:56:59.354261 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.354273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-04-10 00:56:59.354281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-04-10 00:56:59.354294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-04-10 00:56:59.354302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-04-10 00:56:59.354310 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.354318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-04-10 00:56:59.354326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-04-10 00:56:59.354334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-04-10 00:56:59.354342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-04-10 00:56:59.354350 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.354361 | orchestrator |
2025-04-10 00:56:59.354370 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] ***************
2025-04-10 00:56:59.354434 | orchestrator | Thursday 10 April 2025 00:52:16 +0000 (0:00:01.366) 0:03:14.466 ********
2025-04-10 00:56:59.354443 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.354452 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.354460 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.354468 | orchestrator |
2025-04-10 00:56:59.354476 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] ***************
2025-04-10 00:56:59.354484 | orchestrator | Thursday 10 April 2025 00:52:17 +0000 (0:00:01.491) 0:03:15.957 ********
2025-04-10 00:56:59.354492 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.354500 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.354507 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.354516 | orchestrator |
2025-04-10 00:56:59.354527 | orchestrator | TASK [include_role : horizon] **************************************************
2025-04-10 00:56:59.354535 | orchestrator | Thursday 10 April 2025 00:52:19 +0000 (0:00:02.395) 0:03:18.352 ********
2025-04-10 00:56:59.354543 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:56:59.354551 | orchestrator |
2025-04-10 00:56:59.354559 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-04-10 00:56:59.354567 | orchestrator | Thursday 10 April 2025 00:52:21 +0000 (0:00:01.172) 0:03:19.525 ********
2025-04-10 00:56:59.354581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-04-10 00:56:59.354595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-04-10 00:56:59.354610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-04-10 00:56:59.354623 | orchestrator |
2025-04-10 00:56:59.354631 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-04-10 00:56:59.354640 | orchestrator | Thursday 10 April 2025 00:52:26 +0000 (0:00:05.213) 0:03:24.738 ********
2025-04-10 00:56:59.354648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-04-10 00:56:59.354662 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.354683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-04-10 00:56:59.354692 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.354701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-04-10 00:56:59.354713 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.354721 | orchestrator |
2025-04-10 00:56:59.354732 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-04-10 00:56:59.354741 | orchestrator | Thursday 10 April 2025 00:52:27 +0000 (0:00:00.962) 0:03:25.700 ********
2025-04-10 00:56:59.354750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-04-10 00:56:59.354759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80',
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-10 00:56:59.354769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-10 00:56:59.354778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-10 00:56:59.354786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-10 00:56:59.354795 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.354807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-10 00:56:59.354815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-10 00:56:59.354824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-10 00:56:59.354832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-10 00:56:59.354844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-10 00:56:59.354852 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.354967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-10 00:56:59.354991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-10 00:56:59.355007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}})  2025-04-10 00:56:59.355016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-10 00:56:59.355025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-10 00:56:59.355033 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.355041 | orchestrator | 2025-04-10 00:56:59.355049 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-04-10 00:56:59.355057 | orchestrator | Thursday 10 April 2025 00:52:28 +0000 (0:00:01.344) 0:03:27.044 ******** 2025-04-10 00:56:59.355066 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.355073 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.355081 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.355090 | orchestrator | 2025-04-10 00:56:59.355098 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-04-10 00:56:59.355106 | orchestrator | Thursday 10 April 2025 00:52:30 +0000 (0:00:01.576) 0:03:28.621 ******** 2025-04-10 00:56:59.355114 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.355122 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.355129 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.355137 | orchestrator | 2025-04-10 00:56:59.355145 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-04-10 00:56:59.355153 | orchestrator | Thursday 10 April 2025 00:52:32 +0000 (0:00:02.395) 0:03:31.016 ******** 2025-04-10 00:56:59.355161 | orchestrator | skipping: [testbed-node-0] 
2025-04-10 00:56:59.355169 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.355177 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.355185 | orchestrator | 2025-04-10 00:56:59.355193 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-04-10 00:56:59.355202 | orchestrator | Thursday 10 April 2025 00:52:33 +0000 (0:00:00.489) 0:03:31.506 ******** 2025-04-10 00:56:59.355209 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.355217 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.355225 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.355233 | orchestrator | 2025-04-10 00:56:59.355241 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-04-10 00:56:59.355249 | orchestrator | Thursday 10 April 2025 00:52:33 +0000 (0:00:00.290) 0:03:31.796 ******** 2025-04-10 00:56:59.355257 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.355272 | orchestrator | 2025-04-10 00:56:59.355280 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-04-10 00:56:59.355288 | orchestrator | Thursday 10 April 2025 00:52:34 +0000 (0:00:01.290) 0:03:33.087 ******** 2025-04-10 00:56:59.355297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 00:56:59.355307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 00:56:59.355319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-10 00:56:59.355329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 00:56:59.355338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 00:56:59.355351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-10 00:56:59.355360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 00:56:59.355372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 00:56:59.355381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-10 00:56:59.355389 | orchestrator | 2025-04-10 00:56:59.355398 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-04-10 00:56:59.355406 | orchestrator | Thursday 10 April 2025 00:52:39 +0000 (0:00:04.386) 0:03:37.473 ******** 2025-04-10 00:56:59.355414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-10 00:56:59.355429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 00:56:59.355436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-10 00:56:59.355443 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.355454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-10 00:56:59.355462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 00:56:59.355469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-10 00:56:59.355481 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.355488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-10 00:56:59.355496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 00:56:59.355504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-10 00:56:59.355511 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.355518 | orchestrator | 2025-04-10 00:56:59.355525 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-04-10 00:56:59.355532 | orchestrator | Thursday 10 April 2025 00:52:40 +0000 (0:00:00.991) 0:03:38.464 ******** 2025-04-10 00:56:59.355542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-10 00:56:59.355550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-10 00:56:59.355557 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.355565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-10 00:56:59.355572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-10 00:56:59.355583 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.355591 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-10 00:56:59.355598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-10 00:56:59.355605 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.355612 | orchestrator | 2025-04-10 00:56:59.355619 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-04-10 00:56:59.355626 | orchestrator | Thursday 10 April 2025 00:52:41 +0000 (0:00:01.096) 0:03:39.561 ******** 2025-04-10 00:56:59.355633 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.355640 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.355647 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.355653 | orchestrator | 2025-04-10 00:56:59.355660 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-04-10 00:56:59.355667 | orchestrator | Thursday 10 April 2025 00:52:42 +0000 (0:00:01.503) 0:03:41.064 ******** 2025-04-10 00:56:59.355674 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.355681 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.355688 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.355695 | orchestrator | 2025-04-10 00:56:59.355702 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-04-10 00:56:59.355709 | orchestrator | Thursday 10 April 2025 00:52:45 +0000 (0:00:02.400) 0:03:43.465 ******** 2025-04-10 00:56:59.355716 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.355723 | orchestrator | 
skipping: [testbed-node-1] 2025-04-10 00:56:59.355730 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.355736 | orchestrator | 2025-04-10 00:56:59.355744 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-04-10 00:56:59.355750 | orchestrator | Thursday 10 April 2025 00:52:45 +0000 (0:00:00.299) 0:03:43.764 ******** 2025-04-10 00:56:59.355757 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.355764 | orchestrator | 2025-04-10 00:56:59.355771 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-04-10 00:56:59.355778 | orchestrator | Thursday 10 April 2025 00:52:46 +0000 (0:00:01.365) 0:03:45.130 ******** 2025-04-10 00:56:59.355785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 00:56:59.355796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.355807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 00:56:59.355815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.355822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 00:56:59.355830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.355837 | orchestrator | 2025-04-10 00:56:59.355844 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-04-10 00:56:59.355851 | orchestrator | Thursday 10 April 2025 00:52:52 +0000 (0:00:05.366) 0:03:50.496 ******** 2025-04-10 00:56:59.355884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 00:56:59.355893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.355900 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.355908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 00:56:59.355915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.355922 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.355933 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 00:56:59.355944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.355952 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.355959 | orchestrator | 2025-04-10 00:56:59.355966 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-04-10 00:56:59.355973 | orchestrator | Thursday 10 April 2025 
00:52:53 +0000 (0:00:01.335) 0:03:51.832 ******** 2025-04-10 00:56:59.355980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-10 00:56:59.355988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-10 00:56:59.355999 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.356006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-10 00:56:59.356014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-10 00:56:59.356021 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.356028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-10 00:56:59.356035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-10 00:56:59.356042 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.356049 | orchestrator | 2025-04-10 00:56:59.356056 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-04-10 00:56:59.356063 | orchestrator | Thursday 10 April 2025 00:52:54 +0000 (0:00:01.514) 0:03:53.347 ******** 2025-04-10 00:56:59.356070 | orchestrator | changed: 
[testbed-node-0] 2025-04-10 00:56:59.356077 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.356084 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.356091 | orchestrator | 2025-04-10 00:56:59.356098 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-04-10 00:56:59.356105 | orchestrator | Thursday 10 April 2025 00:52:56 +0000 (0:00:01.583) 0:03:54.930 ******** 2025-04-10 00:56:59.356112 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.356119 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.356126 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.356133 | orchestrator | 2025-04-10 00:56:59.356140 | orchestrator | TASK [include_role : manila] *************************************************** 2025-04-10 00:56:59.356151 | orchestrator | Thursday 10 April 2025 00:52:58 +0000 (0:00:02.405) 0:03:57.336 ******** 2025-04-10 00:56:59.356158 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.356165 | orchestrator | 2025-04-10 00:56:59.356172 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-04-10 00:56:59.356179 | orchestrator | Thursday 10 April 2025 00:53:00 +0000 (0:00:01.166) 0:03:58.502 ******** 2025-04-10 00:56:59.356189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-10 00:56:59.356197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-10 00:56:59.356205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8786', 'listen_port': '8786'}}}}) 2025-04-10 00:56:59.356265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 
'timeout': '30'}}})  2025-04-10 00:56:59.356292 | orchestrator | 2025-04-10 00:56:59.356299 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-04-10 00:56:59.356306 | orchestrator | Thursday 10 April 2025 00:53:04 +0000 (0:00:04.385) 0:04:02.888 ******** 2025-04-10 00:56:59.356316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-10 00:56:59.356324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 
'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356346 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.356363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-10 00:56:59.356371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356397 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.356405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-10 00:56:59.356421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.356448 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.356455 | orchestrator | 2025-04-10 00:56:59.356462 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-04-10 00:56:59.356469 | orchestrator | Thursday 10 April 2025 00:53:05 +0000 (0:00:00.929) 0:04:03.817 ******** 2025-04-10 00:56:59.356476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-10 00:56:59.356486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-10 00:56:59.356494 | orchestrator | skipping: [testbed-node-0] 
2025-04-10 00:56:59.356501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-10 00:56:59.356508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-10 00:56:59.356515 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.356522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-10 00:56:59.356529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-10 00:56:59.356536 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.356543 | orchestrator | 2025-04-10 00:56:59.356550 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-04-10 00:56:59.356557 | orchestrator | Thursday 10 April 2025 00:53:06 +0000 (0:00:01.247) 0:04:05.065 ******** 2025-04-10 00:56:59.356564 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.356571 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.356578 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.356585 | orchestrator | 2025-04-10 00:56:59.356592 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-04-10 00:56:59.356603 | orchestrator | Thursday 10 April 2025 00:53:08 +0000 (0:00:01.470) 0:04:06.535 ******** 2025-04-10 00:56:59.356610 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.356617 | orchestrator | changed: [testbed-node-1] 
2025-04-10 00:56:59.356624 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.356631 | orchestrator |
2025-04-10 00:56:59.356642 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-04-10 00:56:59.356650 | orchestrator | Thursday 10 April 2025 00:53:10 +0000 (0:00:02.423) 0:04:08.959 ********
2025-04-10 00:56:59.356657 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:56:59.356664 | orchestrator |
2025-04-10 00:56:59.356671 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-04-10 00:56:59.356678 | orchestrator | Thursday 10 April 2025 00:53:12 +0000 (0:00:01.488) 0:04:10.447 ********
2025-04-10 00:56:59.356685 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-10 00:56:59.356692 | orchestrator |
2025-04-10 00:56:59.356699 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-04-10 00:56:59.356706 | orchestrator | Thursday 10 April 2025 00:53:15 +0000 (0:00:03.605) 0:04:14.053 ********
2025-04-10 00:56:59.356713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-10 00:56:59.356732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-10 00:56:59.357335 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.357357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-10 00:56:59.357386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-10 00:56:59.357395 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.357412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-10 00:56:59.357426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-10 00:56:59.357433 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.357440 | orchestrator |
2025-04-10 00:56:59.357448 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-04-10 00:56:59.357455 | orchestrator | Thursday 10 April 2025 00:53:18 +0000 (0:00:03.314) 0:04:17.367 ********
2025-04-10 00:56:59.357462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-10 00:56:59.357480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-10 00:56:59.357488 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.357495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-10 00:56:59.357512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-10 00:56:59.357519 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.357530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-10 00:56:59.357547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-10 00:56:59.357555 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.357562 | orchestrator |
2025-04-10 00:56:59.357569 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-04-10 00:56:59.357576 | orchestrator | Thursday 10 April 2025 00:53:22 +0000 (0:00:03.601) 0:04:20.969 ********
2025-04-10 00:56:59.357583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-04-10 00:56:59.357591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-04-10 00:56:59.357598 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.357605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-04-10 00:56:59.357613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-04-10 00:56:59.357620 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.357633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-04-10 00:56:59.357645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-04-10 00:56:59.357653 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.357660 | orchestrator |
2025-04-10 00:56:59.357667 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-04-10 00:56:59.357674 | orchestrator | Thursday 10 April 2025 00:53:25 +0000 (0:00:03.333) 0:04:24.303 ********
2025-04-10 00:56:59.357681 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.357687 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.357694 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.357701 | orchestrator |
2025-04-10 00:56:59.357708 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-04-10 00:56:59.357715 | orchestrator | Thursday 10 April 2025 00:53:28 +0000 (0:00:02.434) 0:04:26.737 ********
2025-04-10 00:56:59.357722 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.357729 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.357736 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.357743 | orchestrator |
2025-04-10 00:56:59.357750 | orchestrator | TASK [include_role : masakari] *************************************************
2025-04-10 00:56:59.357757 | orchestrator | Thursday 10 April 2025 00:53:30 +0000 (0:00:00.312) 0:04:28.747 ********
2025-04-10 00:56:59.357764 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.357771 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.357778 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.357784 | orchestrator |
2025-04-10 00:56:59.357791 | orchestrator | TASK [include_role : memcached] ************************************************
2025-04-10 00:56:59.357798 | orchestrator | Thursday 10 April 2025 00:53:30 +0000 (0:00:00.312) 0:04:29.060 ********
2025-04-10 00:56:59.357805 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:56:59.357812 | orchestrator |
2025-04-10 00:56:59.357819 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-04-10 00:56:59.357826 | orchestrator | Thursday 10 April 2025 00:53:32 +0000 (0:00:01.482) 0:04:30.542 ********
2025-04-10 00:56:59.357833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-04-10 00:56:59.357841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-04-10 00:56:59.357876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-04-10 00:56:59.357885 | orchestrator |
2025-04-10 00:56:59.357892 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-04-10 00:56:59.357899 | orchestrator | Thursday 10 April 2025 00:53:33 +0000 (0:00:01.657) 0:04:32.200 ********
2025-04-10 00:56:59.357906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-04-10 00:56:59.357920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-04-10 00:56:59.357927 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.357934 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.357941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-04-10 00:56:59.357949 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.357957 | orchestrator |
2025-04-10 00:56:59.357965 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-04-10 00:56:59.357977 | orchestrator | Thursday 10 April 2025 00:53:34 +0000 (0:00:00.595) 0:04:32.795 ********
2025-04-10 00:56:59.357984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-04-10 00:56:59.357993 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.358001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-04-10 00:56:59.358009 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.358039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-04-10 00:56:59.358047 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.358055 | orchestrator |
2025-04-10 00:56:59.358067 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-04-10 00:56:59.358075 | orchestrator | Thursday 10 April 2025 00:53:35 +0000 (0:00:00.776) 0:04:33.571 ********
2025-04-10 00:56:59.358083 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.358091 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.358098 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.358106 | orchestrator |
2025-04-10 00:56:59.358115 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-04-10 00:56:59.358123 | orchestrator | Thursday 10 April 2025 00:53:35 +0000 (0:00:00.723) 0:04:34.295 ********
2025-04-10 00:56:59.358131 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.358139 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.358146 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.358154 | orchestrator |
2025-04-10 00:56:59.358162 | orchestrator | TASK [include_role : mistral] **************************************************
2025-04-10 00:56:59.358169 | orchestrator | Thursday 10 April 2025 00:53:37 +0000 (0:00:01.869) 0:04:36.164 ********
2025-04-10 00:56:59.358177 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.358185 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.358193 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.358201 | orchestrator |
2025-04-10 00:56:59.358209 | orchestrator | TASK [include_role : neutron] **************************************************
2025-04-10 00:56:59.358217 | orchestrator | Thursday 10 April 2025 00:53:38 +0000 (0:00:00.314) 0:04:36.479 ********
2025-04-10 00:56:59.358225 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:56:59.358232 | orchestrator |
2025-04-10 00:56:59.358240 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-04-10 00:56:59.358247 | orchestrator | Thursday 10 April 2025 00:53:39 +0000 (0:00:01.580) 0:04:38.059 ********
2025-04-10 00:56:59.358254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-10 00:56:59.358265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.358273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.358290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.358304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-04-10 00:56:59.358312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.358319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 00:56:59.358331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 00:56:59.358340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.358350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 00:56:59.358357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:56:59.358380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 00:56:59.358391 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 00:56:59.358409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 00:56:59.358423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 00:56:59.358441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 00:56:59.358474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358481 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 00:56:59.358492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 00:56:59.358505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 00:56:59.358522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:56:59.358537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 00:56:59.358549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 00:56:59.358569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 00:56:59.358579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 00:56:59.358603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 00:56:59.358635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 00:56:59.358659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 00:56:59.358666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 00:56:59.358681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:56:59.358699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 00:56:59.358710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 00:56:59.358731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 00:56:59.358738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358745 | orchestrator | 2025-04-10 00:56:59.358752 | orchestrator | TASK [haproxy-config : Add 
configuration for neutron when using single external frontend] *** 2025-04-10 00:56:59.358759 | orchestrator | Thursday 10 April 2025 00:53:45 +0000 (0:00:05.632) 0:04:43.692 ******** 2025-04-10 00:56:59.358770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 00:56:59.358787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358795 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 00:56:59.358823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 00:56:59.358841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 00:56:59.358854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 00:56:59.358905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 00:56:59.358950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 00:56:59.358971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:56:59.358983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.358995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 00:56:59.359002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 00:56:59.359010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.359022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-04-10 00:56:59.359030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 00:56:59.359041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.359053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 00:56:59.359060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 00:56:59.359073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.359080 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.359088 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.359095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:56:59.359106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 00:56:59.359117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 00:56:59.359129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.359137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.359144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.359152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 00:56:59.359167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.359180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 00:56:59.359188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 00:56:59.359195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.359202 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.359210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.359221 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 00:56:59.359231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 00:56:59.359327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.359338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-10 00:56:59.359346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.359353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.359360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 00:56:59.359372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.359390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-10 00:56:59.359399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-10 00:56:59.359406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.359414 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.359421 | orchestrator |
2025-04-10 00:56:59.359428 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-04-10 00:56:59.359438 | orchestrator | Thursday 10 April 2025 00:53:47 +0000 (0:00:01.802) 0:04:45.494 ********
2025-04-10 00:56:59.359445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True,
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-04-10 00:56:59.359453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-04-10 00:56:59.359464 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.359474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-04-10 00:56:59.359481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-04-10 00:56:59.359488 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.359495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-04-10 00:56:59.359503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-04-10 00:56:59.359510 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.359520 | orchestrator |
2025-04-10 00:56:59.359527 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-04-10 00:56:59.359534 | orchestrator | Thursday 10 April 2025 00:53:49 +0000 (0:00:01.951) 0:04:47.446 ********
2025-04-10 00:56:59.359541 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.359548 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.359558 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.359565 | orchestrator |
2025-04-10 00:56:59.359572 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-04-10 00:56:59.359579 | orchestrator | Thursday 10 April 2025 00:53:50 +0000 (0:00:01.505) 0:04:48.951 ********
2025-04-10 00:56:59.359586 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.359593 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.359600 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.359607 | orchestrator |
2025-04-10 00:56:59.359614 | orchestrator | TASK [include_role : placement] ************************************************
2025-04-10 00:56:59.359622 | orchestrator | Thursday 10 April 2025 00:53:53 +0000 (0:00:02.483) 0:04:51.435 ********
2025-04-10 00:56:59.359629 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:56:59.359636 | orchestrator |
2025-04-10 00:56:59.359643 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-04-10 00:56:59.359650 | orchestrator | Thursday 10 April 2025 00:53:54 +0000 (0:00:01.669) 0:04:53.105 ********
2025-04-10 00:56:59.359657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.359669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.359681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780',
'tls_backend': 'no'}}}})
2025-04-10 00:56:59.359689 | orchestrator |
2025-04-10 00:56:59.359696 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-04-10 00:56:59.359703 | orchestrator | Thursday 10 April 2025 00:53:58 +0000 (0:00:04.214) 0:04:57.319 ********
2025-04-10 00:56:59.359714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.359721 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.359728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.359736 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.359748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.359759 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.359766 | orchestrator |
2025-04-10 00:56:59.359773 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-04-10 00:56:59.359780 | orchestrator | Thursday 10 April 2025 00:53:59 +0000 (0:00:00.523) 0:04:57.843 ********
2025-04-10 00:56:59.359787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-10 00:56:59.359795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-10 00:56:59.359802 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.359809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-10 00:56:59.359816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-10 00:56:59.359824 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.359831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-10 00:56:59.359838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-10 00:56:59.359845 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.359852 | orchestrator |
2025-04-10 00:56:59.359875 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-04-10 00:56:59.359885 | orchestrator | Thursday 10 April 2025 00:54:00 +0000 (0:00:01.234) 0:04:59.077 ********
2025-04-10 00:56:59.359892 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.359899 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.359906 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.359913 | orchestrator |
2025-04-10 00:56:59.359920 |
orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-04-10 00:56:59.359927 | orchestrator | Thursday 10 April 2025 00:54:02 +0000 (0:00:01.505) 0:05:00.499 ********
2025-04-10 00:56:59.359934 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.359942 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.359950 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.359958 | orchestrator |
2025-04-10 00:56:59.359965 | orchestrator | TASK [include_role : nova] *****************************************************
2025-04-10 00:56:59.359973 | orchestrator | Thursday 10 April 2025 00:54:04 +0000 (0:00:02.184) 0:05:02.683 ********
2025-04-10 00:56:59.359981 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:56:59.359989 | orchestrator |
2025-04-10 00:56:59.359997 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-04-10 00:56:59.360005 | orchestrator | Thursday 10 April 2025 00:54:06 +0000 (0:00:01.787) 0:05:04.471 ********
2025-04-10 00:56:59.360017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.360025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.360033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.360045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.360059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.360071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.360080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.360089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.360099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.360108 | orchestrator |
2025-04-10 00:56:59.360116 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-04-10 00:56:59.360124 | orchestrator | Thursday 10 April 2025 00:54:11 +0000 (0:00:05.898) 0:05:10.369 ********
2025-04-10 00:56:59.360136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.360148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.360157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.360165 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.360173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.360190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.360203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.360211 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.360219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-04-10 00:56:59.360227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image':
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.360235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-10 00:56:59.360243 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.360251 | orchestrator |
2025-04-10 00:56:59.360259 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-04-10 00:56:59.360267 | orchestrator | Thursday 10 April 2025 00:54:13 +0000 (0:00:01.234) 0:05:11.604 ********
2025-04-10 00:56:59.360275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-04-10 00:56:59.360286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-04-10 00:56:59.360298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-04-10 00:56:59.360305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-04-10 00:56:59.360312 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.360319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-04-10 00:56:59.360326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-04-10 00:56:59.360334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-04-10 00:56:59.360340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-04-10 00:56:59.360348 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.360355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-04-10 00:56:59.360362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-04-10 00:56:59.360369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-04-10 00:56:59.360376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-04-10 00:56:59.360383 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.360390 | orchestrator |
2025-04-10 00:56:59.360397 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-04-10 00:56:59.360404 | orchestrator | Thursday 10 April 2025 00:54:14 +0000 (0:00:01.410) 0:05:13.014 ********
2025-04-10 00:56:59.360411 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.360418 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.360425 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.360432 | orchestrator |
2025-04-10 00:56:59.360439 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-04-10 00:56:59.360446 | orchestrator | Thursday 10 April 2025 00:54:16 +0000 (0:00:01.507) 0:05:14.522 ********
2025-04-10 00:56:59.360453 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.360460 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.360467 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.360474 | orchestrator |
2025-04-10 00:56:59.360481 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-04-10 00:56:59.360487 | orchestrator | Thursday 10 April 2025 00:54:18 +0000 (0:00:02.501) 0:05:17.023 ********
2025-04-10
00:56:59.360494 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.360505 | orchestrator | 2025-04-10 00:56:59.360515 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-04-10 00:56:59.360522 | orchestrator | Thursday 10 April 2025 00:54:20 +0000 (0:00:01.809) 0:05:18.832 ******** 2025-04-10 00:56:59.360529 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-04-10 00:56:59.360537 | orchestrator | 2025-04-10 00:56:59.360544 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-04-10 00:56:59.360551 | orchestrator | Thursday 10 April 2025 00:54:21 +0000 (0:00:01.429) 0:05:20.261 ******** 2025-04-10 00:56:59.360561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-10 00:56:59.360574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-10 00:56:59.360582 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-10 00:56:59.360589 | orchestrator | 2025-04-10 00:56:59.360597 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-04-10 00:56:59.360604 | orchestrator | Thursday 10 April 2025 00:54:27 +0000 (0:00:05.614) 0:05:25.876 ******** 2025-04-10 00:56:59.360611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-10 00:56:59.360618 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.360625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-10 00:56:59.360632 | 
orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.360640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-10 00:56:59.360652 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.360659 | orchestrator | 2025-04-10 00:56:59.360666 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-04-10 00:56:59.360673 | orchestrator | Thursday 10 April 2025 00:54:29 +0000 (0:00:02.110) 0:05:27.987 ******** 2025-04-10 00:56:59.360680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-10 00:56:59.360687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-10 00:56:59.360695 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.360702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-10 00:56:59.360714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-10 00:56:59.360721 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.360729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-10 00:56:59.360736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-10 00:56:59.360743 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.360750 | orchestrator | 2025-04-10 00:56:59.360757 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-10 00:56:59.360764 | orchestrator | Thursday 10 April 2025 00:54:31 +0000 (0:00:01.998) 0:05:29.986 ******** 2025-04-10 00:56:59.360771 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.360778 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.360785 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.360792 | orchestrator | 2025-04-10 00:56:59.360799 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-10 00:56:59.360806 | orchestrator | Thursday 10 April 2025 00:54:34 +0000 (0:00:03.102) 0:05:33.088 ******** 2025-04-10 00:56:59.360813 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.360820 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.360827 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.360834 | orchestrator | 2025-04-10 00:56:59.360841 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] 
************* 2025-04-10 00:56:59.360848 | orchestrator | Thursday 10 April 2025 00:54:39 +0000 (0:00:04.338) 0:05:37.427 ******** 2025-04-10 00:56:59.360868 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-04-10 00:56:59.360876 | orchestrator | 2025-04-10 00:56:59.360883 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-04-10 00:56:59.360890 | orchestrator | Thursday 10 April 2025 00:54:40 +0000 (0:00:01.671) 0:05:39.099 ******** 2025-04-10 00:56:59.360897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-10 00:56:59.360908 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.360916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-10 00:56:59.360923 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.360935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-10 00:56:59.360943 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.360950 | orchestrator | 2025-04-10 00:56:59.360957 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-04-10 00:56:59.360964 | orchestrator | Thursday 10 April 2025 00:54:42 +0000 (0:00:01.919) 0:05:41.018 ******** 2025-04-10 00:56:59.360974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-10 00:56:59.360981 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.360989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-10 00:56:59.360996 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.361003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-10 00:56:59.361010 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.361017 | orchestrator | 2025-04-10 00:56:59.361024 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-04-10 00:56:59.361035 | orchestrator | Thursday 10 April 2025 00:54:44 +0000 (0:00:02.154) 0:05:43.173 ******** 2025-04-10 00:56:59.361042 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.361049 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.361056 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.361066 | orchestrator | 2025-04-10 00:56:59.361073 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-10 00:56:59.361080 | orchestrator | Thursday 10 April 2025 00:54:47 +0000 (0:00:02.365) 0:05:45.538 ******** 2025-04-10 00:56:59.361087 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:56:59.361094 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:56:59.361101 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:56:59.361108 | orchestrator | 2025-04-10 00:56:59.361115 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-10 00:56:59.361122 | orchestrator | Thursday 10 April 2025 
00:54:50 +0000 (0:00:03.261) 0:05:48.800 ******** 2025-04-10 00:56:59.361129 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:56:59.361136 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:56:59.361143 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:56:59.361150 | orchestrator | 2025-04-10 00:56:59.361157 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-04-10 00:56:59.361164 | orchestrator | Thursday 10 April 2025 00:54:54 +0000 (0:00:03.747) 0:05:52.548 ******** 2025-04-10 00:56:59.361172 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-04-10 00:56:59.361179 | orchestrator | 2025-04-10 00:56:59.361186 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-04-10 00:56:59.361193 | orchestrator | Thursday 10 April 2025 00:54:55 +0000 (0:00:01.612) 0:05:54.161 ******** 2025-04-10 00:56:59.361200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-10 00:56:59.361207 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.361214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-10 00:56:59.361222 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.361237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-10 00:56:59.361245 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.361252 | orchestrator | 2025-04-10 00:56:59.361259 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-04-10 00:56:59.361266 | orchestrator | Thursday 10 April 2025 00:54:57 +0000 (0:00:01.937) 0:05:56.098 ******** 2025-04-10 00:56:59.361277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-10 00:56:59.361285 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.361292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 
'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-10 00:56:59.361299 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.361306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-10 00:56:59.361314 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.361321 | orchestrator | 2025-04-10 00:56:59.361328 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-04-10 00:56:59.361335 | orchestrator | Thursday 10 April 2025 00:54:59 +0000 (0:00:01.432) 0:05:57.531 ******** 2025-04-10 00:56:59.361342 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.361349 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.361356 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.361363 | orchestrator | 2025-04-10 00:56:59.361370 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-10 00:56:59.361377 | orchestrator | Thursday 10 April 2025 00:55:01 +0000 (0:00:02.100) 0:05:59.631 ******** 2025-04-10 00:56:59.361384 | orchestrator | 
ok: [testbed-node-0] 2025-04-10 00:56:59.361391 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:56:59.361398 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:56:59.361405 | orchestrator | 2025-04-10 00:56:59.361412 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-10 00:56:59.361419 | orchestrator | Thursday 10 April 2025 00:55:04 +0000 (0:00:02.907) 0:06:02.539 ******** 2025-04-10 00:56:59.361426 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:56:59.361433 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:56:59.361440 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:56:59.361447 | orchestrator | 2025-04-10 00:56:59.361454 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-04-10 00:56:59.361461 | orchestrator | Thursday 10 April 2025 00:55:07 +0000 (0:00:03.719) 0:06:06.259 ******** 2025-04-10 00:56:59.361468 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.361475 | orchestrator | 2025-04-10 00:56:59.361482 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-04-10 00:56:59.361489 | orchestrator | Thursday 10 April 2025 00:55:09 +0000 (0:00:01.778) 0:06:08.037 ******** 2025-04-10 00:56:59.361499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.361510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-10 00:56:59.361524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.361532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.361540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.361547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.361563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-10 00:56:59.361573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.361581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.361588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.361595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.361603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-10 00:56:59.361615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.361629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.361637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.361644 | orchestrator | 2025-04-10 00:56:59.361652 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single 
external frontend] *** 2025-04-10 00:56:59.361659 | orchestrator | Thursday 10 April 2025 00:55:14 +0000 (0:00:04.603) 0:06:12.641 ******** 2025-04-10 00:56:59.361666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.361673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-10 00:56:59.361681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.361692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.361706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.361777 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.361786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.361794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-10 00:56:59.361801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.361808 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.361820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.361834 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.361897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.361908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-10 00:56:59.361916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.361923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-10 00:56:59.361930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-10 00:56:59.361949 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.361957 | orchestrator | 2025-04-10 00:56:59.361964 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-04-10 00:56:59.361971 | orchestrator | Thursday 10 April 2025 00:55:15 +0000 (0:00:01.005) 0:06:13.647 ******** 2025-04-10 00:56:59.361978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-10 00:56:59.361985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-10 00:56:59.361993 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.362000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-10 00:56:59.362007 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-10 00:56:59.362033 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.362061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-10 00:56:59.362070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-10 00:56:59.362077 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.362084 | orchestrator | 2025-04-10 00:56:59.362092 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-04-10 00:56:59.362099 | orchestrator | Thursday 10 April 2025 00:55:16 +0000 (0:00:01.463) 0:06:15.110 ******** 2025-04-10 00:56:59.362106 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.362113 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.362120 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.362127 | orchestrator | 2025-04-10 00:56:59.362134 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-04-10 00:56:59.362141 | orchestrator | Thursday 10 April 2025 00:55:18 +0000 (0:00:01.577) 0:06:16.688 ******** 2025-04-10 00:56:59.362148 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.362155 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.362162 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.362169 | orchestrator | 2025-04-10 00:56:59.362176 | orchestrator | TASK [include_role : opensearch] 
*********************************************** 2025-04-10 00:56:59.362183 | orchestrator | Thursday 10 April 2025 00:55:21 +0000 (0:00:02.830) 0:06:19.518 ******** 2025-04-10 00:56:59.362190 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.362197 | orchestrator | 2025-04-10 00:56:59.362204 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-04-10 00:56:59.362211 | orchestrator | Thursday 10 April 2025 00:55:22 +0000 (0:00:01.863) 0:06:21.382 ******** 2025-04-10 00:56:59.362218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:56:59.362230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:56:59.362237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:56:59.362261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:56:59.362276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:56:59.362288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:56:59.362296 | orchestrator | 2025-04-10 00:56:59.362303 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-04-10 00:56:59.362310 | orchestrator | Thursday 10 April 2025 00:55:29 +0000 (0:00:06.945) 0:06:28.328 ******** 2025-04-10 00:56:59.362333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-10 00:56:59.362342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-10 00:56:59.362355 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.362362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-10 00:56:59.362375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-10 00:56:59.362383 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.362390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-10 00:56:59.362418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-10 00:56:59.362428 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.362435 | orchestrator | 2025-04-10 00:56:59.362443 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-04-10 00:56:59.362455 | orchestrator | Thursday 10 April 2025 00:55:30 +0000 (0:00:01.064) 0:06:29.392 ******** 2025-04-10 00:56:59.362463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-10 00:56:59.362471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-10 00:56:59.362479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-04-10 00:56:59.362488 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.362499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-04-10 00:56:59.362507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-04-10 00:56:59.362515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-04-10 00:56:59.362523 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.362531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-04-10 00:56:59.362539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-04-10 00:56:59.362547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-04-10 00:56:59.362555 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.362563 | orchestrator |
2025-04-10 00:56:59.362571 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-04-10 00:56:59.362634 | orchestrator | Thursday 10 April 2025 00:55:32 +0000 (0:00:01.478) 0:06:30.870 ********
2025-04-10 00:56:59.362643 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.362650 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.362658 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.362666 | orchestrator |
2025-04-10 00:56:59.362673 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-04-10 00:56:59.362682 | orchestrator | Thursday 10 April 2025 00:55:32 +0000 (0:00:00.470) 0:06:31.341 ********
2025-04-10 00:56:59.362689 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.362697 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.362705 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.362712 | orchestrator |
2025-04-10 00:56:59.362720 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-04-10 00:56:59.362728 | orchestrator | Thursday 10 April 2025 00:55:34 +0000 (0:00:01.804) 0:06:33.145 ********
2025-04-10 00:56:59.362752 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 00:56:59.362761 | orchestrator |
2025-04-10 00:56:59.362769 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-04-10 00:56:59.362780 | orchestrator | Thursday 10 April 2025 00:55:36 +0000 (0:00:01.923) 0:06:35.069 ********
2025-04-10 00:56:59.362787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 00:56:59.362795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-10 00:56:59.362803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.362810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.362817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-10 00:56:59.362825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 00:56:59.362851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-10 00:56:59.362878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.362886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 00:56:59.362893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.362901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-10 00:56:59.362908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-10 00:56:59.362915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.362937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.362950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-10 00:56:59.362958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-10 00:56:59.362966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-10 00:56:59.362973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.362981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-10 00:56:59.363016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-10 00:56:59.363032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-10 00:56:59.363040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-10 00:56:59.363052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-10 00:56:59.363070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-10 00:56:59.363092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-10 00:56:59.363121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363129 | orchestrator |
2025-04-10 00:56:59.363136 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-04-10 00:56:59.363144 | orchestrator | Thursday 10 April 2025 00:55:41 +0000 (0:00:05.125) 0:06:40.195 ********
2025-04-10 00:56:59.363151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 00:56:59.363158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-10 00:56:59.363165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-10 00:56:59.363194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-10 00:56:59.363202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-10 00:56:59.363210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 00:56:59.363225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-10 00:56:59.363246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-10 00:56:59.363253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363268 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.363275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-10 00:56:59.363290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-10 00:56:59.363304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-10 00:56:59.363312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-10 00:56:59.363334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 00:56:59.363341 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.363349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 00:56:59.363360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 00:56:59.363367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:56:59.363377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:56:59.363385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 00:56:59.363392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 00:56:59.363400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 00:56:59.363413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:56:59.363420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:56:59.363431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 00:56:59.363439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 00:56:59.363446 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.363453 | orchestrator | 2025-04-10 00:56:59.363460 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-04-10 00:56:59.363467 | orchestrator | Thursday 10 April 2025 00:55:43 +0000 (0:00:01.650) 0:06:41.845 ******** 2025-04-10 00:56:59.363474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-10 00:56:59.363482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-10 00:56:59.363489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-10 00:56:59.363497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-10 00:56:59.363511 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.363519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-10 00:56:59.363526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-10 00:56:59.363533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-10 00:56:59.363541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-10 00:56:59.363548 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.363555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-10 00:56:59.363565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-10 00:56:59.363572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-10 00:56:59.363582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-10 00:56:59.363589 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.363600 | orchestrator | 2025-04-10 00:56:59.363607 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-04-10 00:56:59.363614 | orchestrator | Thursday 10 April 2025 00:55:45 +0000 (0:00:01.785) 0:06:43.630 ******** 2025-04-10 00:56:59.363621 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.363628 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.363638 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.363645 | orchestrator | 2025-04-10 00:56:59.363652 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-04-10 00:56:59.363659 | orchestrator | Thursday 10 April 2025 00:55:45 +0000 (0:00:00.739) 0:06:44.369 ******** 2025-04-10 00:56:59.363666 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.363673 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.363680 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.363687 | orchestrator | 2025-04-10 00:56:59.363694 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-04-10 00:56:59.363701 | orchestrator | Thursday 10 April 2025 00:55:48 +0000 (0:00:02.130) 0:06:46.500 ******** 2025-04-10 00:56:59.363708 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.363719 | orchestrator | 2025-04-10 00:56:59.363726 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-04-10 00:56:59.363733 | orchestrator | Thursday 10 April 2025 00:55:50 +0000 (0:00:01.981) 0:06:48.482 ******** 2025-04-10 00:56:59.363740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-10 00:56:59.363748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-10 00:56:59.363758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-10 00:56:59.363766 | orchestrator | 2025-04-10 00:56:59.363773 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-04-10 00:56:59.363780 | orchestrator | Thursday 10 April 2025 00:55:53 +0000 (0:00:03.107) 0:06:51.590 ******** 2025-04-10 00:56:59.363788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-10 00:56:59.363799 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.363806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-10 00:56:59.363814 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.363821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-10 00:56:59.363829 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.363836 | orchestrator | 2025-04-10 00:56:59.363843 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-04-10 00:56:59.363850 | orchestrator | Thursday 10 April 2025 00:55:53 +0000 (0:00:00.697) 0:06:52.287 ******** 2025-04-10 00:56:59.363869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-10 00:56:59.363877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-10 00:56:59.363884 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.363891 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.363901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-10 00:56:59.363908 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.363915 | orchestrator | 2025-04-10 00:56:59.363922 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-04-10 00:56:59.363929 | orchestrator | Thursday 10 April 2025 00:55:55 +0000 (0:00:01.167) 0:06:53.454 ******** 2025-04-10 00:56:59.363936 | orchestrator | skipping: [testbed-node-0] 
2025-04-10 00:56:59.363947 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.363954 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.363961 | orchestrator | 2025-04-10 00:56:59.363968 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-04-10 00:56:59.363975 | orchestrator | Thursday 10 April 2025 00:55:55 +0000 (0:00:00.464) 0:06:53.919 ******** 2025-04-10 00:56:59.363982 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.363988 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.363995 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.364002 | orchestrator | 2025-04-10 00:56:59.364009 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-04-10 00:56:59.364016 | orchestrator | Thursday 10 April 2025 00:55:57 +0000 (0:00:01.860) 0:06:55.779 ******** 2025-04-10 00:56:59.364023 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:56:59.364030 | orchestrator | 2025-04-10 00:56:59.364037 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-04-10 00:56:59.364044 | orchestrator | Thursday 10 April 2025 00:55:59 +0000 (0:00:02.002) 0:06:57.782 ******** 2025-04-10 00:56:59.364052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.364059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.364067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.364081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.364089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.364096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-10 00:56:59.364104 | orchestrator | 2025-04-10 00:56:59.364111 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-04-10 00:56:59.364118 | orchestrator | Thursday 10 April 2025 00:56:07 +0000 (0:00:08.508) 0:07:06.290 ******** 2025-04-10 00:56:59.364125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.364139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.364147 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.364160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.364168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.364175 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.364182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.364194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-10 00:56:59.364208 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.364215 | orchestrator | 2025-04-10 00:56:59.364222 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-04-10 00:56:59.364229 | orchestrator | Thursday 10 April 2025 00:56:09 +0000 (0:00:01.292) 0:07:07.583 ******** 2025-04-10 00:56:59.364236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-10 00:56:59.364243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-10 00:56:59.364251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-10 00:56:59.364258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-10 00:56:59.364265 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:56:59.364272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-10 00:56:59.364279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-10 00:56:59.364286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-10 00:56:59.364293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-10 
00:56:59.364301 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:56:59.364308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-10 00:56:59.364315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-10 00:56:59.364322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-10 00:56:59.364332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-10 00:56:59.364339 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:56:59.364347 | orchestrator | 2025-04-10 00:56:59.364353 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-04-10 00:56:59.364360 | orchestrator | Thursday 10 April 2025 00:56:10 +0000 (0:00:01.527) 0:07:09.110 ******** 2025-04-10 00:56:59.364367 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:56:59.364374 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:56:59.364381 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:56:59.364388 | orchestrator | 2025-04-10 00:56:59.364396 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-04-10 00:56:59.364405 | orchestrator | Thursday 10 April 2025 00:56:12 +0000 (0:00:01.542) 0:07:10.653 ******** 2025-04-10 00:56:59.364412 | orchestrator | 
changed: [testbed-node-0]
2025-04-10 00:56:59.364419 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.364426 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.364433 | orchestrator |
2025-04-10 00:56:59.364440 | orchestrator | TASK [include_role : swift] ****************************************************
2025-04-10 00:56:59.364447 | orchestrator | Thursday 10 April 2025 00:56:14 +0000 (0:00:02.684) 0:07:13.337 ********
2025-04-10 00:56:59.364454 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.364461 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.364470 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.364477 | orchestrator |
2025-04-10 00:56:59.364485 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-04-10 00:56:59.364492 | orchestrator | Thursday 10 April 2025 00:56:15 +0000 (0:00:00.350) 0:07:13.687 ********
2025-04-10 00:56:59.364498 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.364505 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.364512 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.364519 | orchestrator |
2025-04-10 00:56:59.364526 | orchestrator | TASK [include_role : trove] ****************************************************
2025-04-10 00:56:59.364533 | orchestrator | Thursday 10 April 2025 00:56:15 +0000 (0:00:00.610) 0:07:14.298 ********
2025-04-10 00:56:59.364540 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.364547 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.364554 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.364561 | orchestrator |
2025-04-10 00:56:59.364568 | orchestrator | TASK [include_role : venus] ****************************************************
2025-04-10 00:56:59.364575 | orchestrator | Thursday 10 April 2025 00:56:16 +0000 (0:00:00.575) 0:07:14.874 ********
2025-04-10 00:56:59.364582 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.364589 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.364596 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.364603 | orchestrator |
2025-04-10 00:56:59.364610 | orchestrator | TASK [include_role : watcher] **************************************************
2025-04-10 00:56:59.364617 | orchestrator | Thursday 10 April 2025 00:56:17 +0000 (0:00:00.581) 0:07:15.455 ********
2025-04-10 00:56:59.364624 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.364630 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.364638 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.364644 | orchestrator |
2025-04-10 00:56:59.364651 | orchestrator | TASK [include_role : zun] ******************************************************
2025-04-10 00:56:59.364658 | orchestrator | Thursday 10 April 2025 00:56:17 +0000 (0:00:00.344) 0:07:15.800 ********
2025-04-10 00:56:59.364665 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.364672 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.364679 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.364686 | orchestrator |
2025-04-10 00:56:59.364693 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-04-10 00:56:59.364706 | orchestrator | Thursday 10 April 2025 00:56:18 +0000 (0:00:01.137) 0:07:16.937 ********
2025-04-10 00:56:59.364713 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:56:59.364720 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:56:59.364727 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:56:59.364734 | orchestrator |
2025-04-10 00:56:59.364741 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-04-10 00:56:59.364748 | orchestrator | Thursday 10 April 2025 00:56:19 +0000 (0:00:00.967) 0:07:17.905 ********
2025-04-10 00:56:59.364755 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:56:59.364763 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:56:59.364770 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:56:59.364777 | orchestrator |
2025-04-10 00:56:59.364784 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-04-10 00:56:59.364791 | orchestrator | Thursday 10 April 2025 00:56:19 +0000 (0:00:00.357) 0:07:18.262 ********
2025-04-10 00:56:59.364798 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:56:59.364805 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:56:59.364812 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:56:59.364819 | orchestrator |
2025-04-10 00:56:59.364826 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-04-10 00:56:59.364833 | orchestrator | Thursday 10 April 2025 00:56:21 +0000 (0:00:01.314) 0:07:19.577 ********
2025-04-10 00:56:59.364840 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:56:59.364846 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:56:59.364853 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:56:59.364872 | orchestrator |
2025-04-10 00:56:59.364879 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-04-10 00:56:59.364886 | orchestrator | Thursday 10 April 2025 00:56:22 +0000 (0:00:01.333) 0:07:20.911 ********
2025-04-10 00:56:59.364893 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:56:59.364900 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:56:59.364907 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:56:59.364914 | orchestrator |
2025-04-10 00:56:59.364921 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-04-10 00:56:59.364928 | orchestrator | Thursday 10 April 2025 00:56:23 +0000 (0:00:00.983) 0:07:21.894 ********
2025-04-10 00:56:59.364935 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.364942 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.364949 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.364956 | orchestrator |
2025-04-10 00:56:59.364963 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-04-10 00:56:59.364970 | orchestrator | Thursday 10 April 2025 00:56:29 +0000 (0:00:05.579) 0:07:27.473 ********
2025-04-10 00:56:59.364977 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:56:59.364984 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:56:59.364991 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:56:59.364998 | orchestrator |
2025-04-10 00:56:59.365005 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-04-10 00:56:59.365012 | orchestrator | Thursday 10 April 2025 00:56:32 +0000 (0:00:03.149) 0:07:30.623 ********
2025-04-10 00:56:59.365019 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.365026 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.365033 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.365040 | orchestrator |
2025-04-10 00:56:59.365047 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-04-10 00:56:59.365054 | orchestrator | Thursday 10 April 2025 00:56:38 +0000 (0:00:06.720) 0:07:37.344 ********
2025-04-10 00:56:59.365061 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:56:59.365068 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:56:59.365075 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:56:59.365082 | orchestrator |
2025-04-10 00:56:59.365089 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-04-10 00:56:59.365099 | orchestrator | Thursday 10 April 2025 00:56:42 +0000 (0:00:03.761) 0:07:41.105 ********
2025-04-10 00:56:59.365109 | orchestrator | changed: [testbed-node-1]
2025-04-10 00:56:59.365116 | orchestrator | changed: [testbed-node-2]
2025-04-10 00:56:59.365123 | orchestrator | changed: [testbed-node-0]
2025-04-10 00:56:59.365130 | orchestrator |
2025-04-10 00:56:59.365140 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-04-10 00:56:59.365147 | orchestrator | Thursday 10 April 2025 00:56:51 +0000 (0:00:08.873) 0:07:49.978 ********
2025-04-10 00:56:59.365154 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.365161 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.365168 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.365175 | orchestrator |
2025-04-10 00:56:59.365182 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-04-10 00:56:59.365189 | orchestrator | Thursday 10 April 2025 00:56:52 +0000 (0:00:00.631) 0:07:50.610 ********
2025-04-10 00:56:59.365195 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.365202 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.365210 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.365217 | orchestrator |
2025-04-10 00:56:59.365224 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-04-10 00:56:59.365231 | orchestrator | Thursday 10 April 2025 00:56:52 +0000 (0:00:00.628) 0:07:51.238 ********
2025-04-10 00:56:59.365238 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.365244 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.365251 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.365258 | orchestrator |
2025-04-10 00:56:59.365266 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-04-10 00:56:59.365273 | orchestrator | Thursday 10 April 2025 00:56:53 +0000 (0:00:00.365) 0:07:51.603 ********
2025-04-10 00:56:59.365280 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.365287 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.365294 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.365301 | orchestrator |
2025-04-10 00:56:59.365308 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-04-10 00:56:59.365314 | orchestrator | Thursday 10 April 2025 00:56:53 +0000 (0:00:00.641) 0:07:52.245 ********
2025-04-10 00:56:59.365321 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.365328 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.365335 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.365342 | orchestrator |
2025-04-10 00:56:59.365349 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-04-10 00:56:59.365356 | orchestrator | Thursday 10 April 2025 00:56:54 +0000 (0:00:00.627) 0:07:52.872 ********
2025-04-10 00:56:59.365363 | orchestrator | skipping: [testbed-node-0]
2025-04-10 00:56:59.365370 | orchestrator | skipping: [testbed-node-1]
2025-04-10 00:56:59.365377 | orchestrator | skipping: [testbed-node-2]
2025-04-10 00:56:59.365384 | orchestrator |
2025-04-10 00:56:59.365391 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-04-10 00:56:59.365398 | orchestrator | Thursday 10 April 2025 00:56:54 +0000 (0:00:00.330) 0:07:53.203 ********
2025-04-10 00:56:59.365405 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:56:59.365412 | orchestrator | ok: [testbed-node-2]
2025-04-10 00:56:59.365419 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:56:59.365426 | orchestrator |
2025-04-10 00:56:59.365433 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-04-10 00:56:59.365440 | orchestrator | Thursday 10 April 2025 00:56:56 +0000 (0:00:01.302) 0:07:54.506 ********
2025-04-10 00:56:59.365447 | orchestrator | ok: [testbed-node-0]
2025-04-10 00:56:59.365454 | orchestrator | ok: [testbed-node-1]
2025-04-10 00:56:59.365461 | orchestrator | ok: [testbed-node-2]
2025-04-10
00:56:59.365470 | orchestrator |
2025-04-10 00:56:59.365478 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 00:56:59.365485 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-04-10 00:56:59.365496 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-04-10 00:56:59.365503 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-04-10 00:56:59.365510 | orchestrator |
2025-04-10 00:56:59.365517 | orchestrator |
2025-04-10 00:56:59.365524 | orchestrator | TASKS RECAP ********************************************************************
2025-04-10 00:56:59.365531 | orchestrator | Thursday 10 April 2025 00:56:57 +0000 (0:00:01.300) 0:07:55.806 ********
2025-04-10 00:56:59.365538 | orchestrator | ===============================================================================
2025-04-10 00:56:59.365545 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.87s
2025-04-10 00:56:59.365552 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.51s
2025-04-10 00:56:59.365559 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 7.24s
2025-04-10 00:56:59.365566 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.99s
2025-04-10 00:56:59.365573 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 6.95s
2025-04-10 00:56:59.365579 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.95s
2025-04-10 00:56:59.365586 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 6.72s
2025-04-10 00:56:59.365593 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.14s
2025-04-10 00:56:59.365600 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.09s
2025-04-10 00:56:59.365607 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 6.06s
2025-04-10 00:56:59.365614 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.90s
2025-04-10 00:56:59.365624 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 5.81s
2025-04-10 00:56:59.365631 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.63s
2025-04-10 00:56:59.365638 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.61s
2025-04-10 00:56:59.365647 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.58s
2025-04-10 00:57:02.400156 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.53s
2025-04-10 00:57:02.400284 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 5.37s
2025-04-10 00:57:02.400304 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.27s
2025-04-10 00:57:02.400320 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.21s
2025-04-10 00:57:02.400334 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.19s
2025-04-10 00:57:02.400349 | orchestrator | 2025-04-10 00:56:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 00:57:02.400364 | orchestrator | 2025-04-10 00:56:59 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED
2025-04-10 00:57:02.400378 | orchestrator | 2025-04-10 00:56:59 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:57:02.400392 | orchestrator | 2025-04-10
00:56:59 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:57:02.400425 | orchestrator | 2025-04-10 00:57:02 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED
2025-04-10 00:57:02.401145 | orchestrator | 2025-04-10 00:57:02 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 00:57:02.401986 | orchestrator | 2025-04-10 00:57:02 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED
2025-04-10 00:57:02.405561 | orchestrator | 2025-04-10 00:57:02 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED
2025-04-10 00:57:02.406601 | orchestrator | 2025-04-10 00:57:02 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:58:06.515161 | orchestrator | 2025-04-10 00:58:03 | INFO  | Wait 1 second(s) until the next check
2025-04-10 00:58:06.515299 | orchestrator | 2025-04-10 00:58:06 | INFO  | Task
dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:06.525936 | orchestrator | 2025-04-10 00:58:06 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:06.528033 | orchestrator | 2025-04-10 00:58:06 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:06.529699 | orchestrator | 2025-04-10 00:58:06 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:09.576802 | orchestrator | 2025-04-10 00:58:06 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:09.576974 | orchestrator | 2025-04-10 00:58:09 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:09.578735 | orchestrator | 2025-04-10 00:58:09 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:09.581405 | orchestrator | 2025-04-10 00:58:09 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:09.583799 | orchestrator | 2025-04-10 00:58:09 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:12.632135 | orchestrator | 2025-04-10 00:58:09 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:12.632307 | orchestrator | 2025-04-10 00:58:12 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:12.633753 | orchestrator | 2025-04-10 00:58:12 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:12.635505 | orchestrator | 2025-04-10 00:58:12 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:12.637227 | orchestrator | 2025-04-10 00:58:12 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:15.694615 | orchestrator | 2025-04-10 00:58:12 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:15.694761 | orchestrator | 2025-04-10 00:58:15 | INFO  | Task 
dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:15.695612 | orchestrator | 2025-04-10 00:58:15 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:15.695649 | orchestrator | 2025-04-10 00:58:15 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:15.697763 | orchestrator | 2025-04-10 00:58:15 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:18.744352 | orchestrator | 2025-04-10 00:58:15 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:18.744523 | orchestrator | 2025-04-10 00:58:18 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:18.745793 | orchestrator | 2025-04-10 00:58:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:18.747460 | orchestrator | 2025-04-10 00:58:18 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:18.750227 | orchestrator | 2025-04-10 00:58:18 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:21.810639 | orchestrator | 2025-04-10 00:58:18 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:21.810811 | orchestrator | 2025-04-10 00:58:21 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:21.811699 | orchestrator | 2025-04-10 00:58:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:21.812762 | orchestrator | 2025-04-10 00:58:21 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:21.814355 | orchestrator | 2025-04-10 00:58:21 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:24.866963 | orchestrator | 2025-04-10 00:58:21 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:24.867093 | orchestrator | 2025-04-10 00:58:24 | INFO  | Task 
dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:24.873200 | orchestrator | 2025-04-10 00:58:24 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:24.877582 | orchestrator | 2025-04-10 00:58:24 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:24.881618 | orchestrator | 2025-04-10 00:58:24 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:27.944785 | orchestrator | 2025-04-10 00:58:24 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:27.944990 | orchestrator | 2025-04-10 00:58:27 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:27.948213 | orchestrator | 2025-04-10 00:58:27 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:27.951098 | orchestrator | 2025-04-10 00:58:27 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:27.952593 | orchestrator | 2025-04-10 00:58:27 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:27.953773 | orchestrator | 2025-04-10 00:58:27 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:31.036110 | orchestrator | 2025-04-10 00:58:31 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:31.037074 | orchestrator | 2025-04-10 00:58:31 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:31.037109 | orchestrator | 2025-04-10 00:58:31 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:31.038294 | orchestrator | 2025-04-10 00:58:31 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:31.038367 | orchestrator | 2025-04-10 00:58:31 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:34.093606 | orchestrator | 2025-04-10 00:58:34 | INFO  | Task 
dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:34.094917 | orchestrator | 2025-04-10 00:58:34 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:34.095777 | orchestrator | 2025-04-10 00:58:34 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:34.097015 | orchestrator | 2025-04-10 00:58:34 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:34.097118 | orchestrator | 2025-04-10 00:58:34 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:37.138461 | orchestrator | 2025-04-10 00:58:37 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:37.140270 | orchestrator | 2025-04-10 00:58:37 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:37.142363 | orchestrator | 2025-04-10 00:58:37 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:37.143610 | orchestrator | 2025-04-10 00:58:37 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:40.188044 | orchestrator | 2025-04-10 00:58:37 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:40.188191 | orchestrator | 2025-04-10 00:58:40 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:40.190643 | orchestrator | 2025-04-10 00:58:40 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:40.191986 | orchestrator | 2025-04-10 00:58:40 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:40.193126 | orchestrator | 2025-04-10 00:58:40 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:43.255501 | orchestrator | 2025-04-10 00:58:40 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:43.255641 | orchestrator | 2025-04-10 00:58:43 | INFO  | Task 
dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:43.256582 | orchestrator | 2025-04-10 00:58:43 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:43.256616 | orchestrator | 2025-04-10 00:58:43 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:43.258602 | orchestrator | 2025-04-10 00:58:43 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:43.260701 | orchestrator | 2025-04-10 00:58:43 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:46.304953 | orchestrator | 2025-04-10 00:58:46 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:46.307252 | orchestrator | 2025-04-10 00:58:46 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:46.308222 | orchestrator | 2025-04-10 00:58:46 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:46.309999 | orchestrator | 2025-04-10 00:58:46 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:46.310159 | orchestrator | 2025-04-10 00:58:46 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:49.375169 | orchestrator | 2025-04-10 00:58:49 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:49.377129 | orchestrator | 2025-04-10 00:58:49 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:49.379412 | orchestrator | 2025-04-10 00:58:49 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:49.381286 | orchestrator | 2025-04-10 00:58:49 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:52.432550 | orchestrator | 2025-04-10 00:58:49 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:52.432696 | orchestrator | 2025-04-10 00:58:52 | INFO  | Task 
dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:52.434119 | orchestrator | 2025-04-10 00:58:52 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:52.436937 | orchestrator | 2025-04-10 00:58:52 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:52.440073 | orchestrator | 2025-04-10 00:58:52 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:55.490912 | orchestrator | 2025-04-10 00:58:52 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:55.491049 | orchestrator | 2025-04-10 00:58:55 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:58.545660 | orchestrator | 2025-04-10 00:58:55 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:58.545803 | orchestrator | 2025-04-10 00:58:55 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:58.545818 | orchestrator | 2025-04-10 00:58:55 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:58:58.545830 | orchestrator | 2025-04-10 00:58:55 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:58:58.545891 | orchestrator | 2025-04-10 00:58:58 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:58:58.547156 | orchestrator | 2025-04-10 00:58:58 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:58:58.548600 | orchestrator | 2025-04-10 00:58:58 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:58:58.549918 | orchestrator | 2025-04-10 00:58:58 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:59:01.591524 | orchestrator | 2025-04-10 00:58:58 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:59:01.591706 | orchestrator | 2025-04-10 00:59:01 | INFO  | Task 
dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:59:01.592384 | orchestrator | 2025-04-10 00:59:01 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:59:01.592421 | orchestrator | 2025-04-10 00:59:01 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:59:01.593155 | orchestrator | 2025-04-10 00:59:01 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:59:04.636645 | orchestrator | 2025-04-10 00:59:01 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:59:04.636821 | orchestrator | 2025-04-10 00:59:04 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:59:04.639413 | orchestrator | 2025-04-10 00:59:04 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:59:04.639486 | orchestrator | 2025-04-10 00:59:04 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:59:07.698232 | orchestrator | 2025-04-10 00:59:04 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:59:07.698391 | orchestrator | 2025-04-10 00:59:04 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:59:07.698559 | orchestrator | 2025-04-10 00:59:07 | INFO  | Task dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state STARTED 2025-04-10 00:59:07.699613 | orchestrator | 2025-04-10 00:59:07 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:59:07.699646 | orchestrator | 2025-04-10 00:59:07 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:59:07.700781 | orchestrator | 2025-04-10 00:59:07 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:59:10.753261 | orchestrator | 2025-04-10 00:59:07 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:59:10.753406 | orchestrator | 2025-04-10 00:59:10 | INFO  | Task 
dd602045-9574-4bdb-9de7-a6b6ff6778ba is in state SUCCESS 2025-04-10 00:59:10.754286 | orchestrator | 2025-04-10 00:59:10.754320 | orchestrator | 2025-04-10 00:59:10.754335 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 00:59:10.754350 | orchestrator | 2025-04-10 00:59:10.754364 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 00:59:10.754378 | orchestrator | Thursday 10 April 2025 00:57:01 +0000 (0:00:00.338) 0:00:00.338 ******** 2025-04-10 00:59:10.754392 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:59:10.754408 | orchestrator | ok: [testbed-node-1] 2025-04-10 00:59:10.754423 | orchestrator | ok: [testbed-node-2] 2025-04-10 00:59:10.754437 | orchestrator | 2025-04-10 00:59:10.754451 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 00:59:10.754465 | orchestrator | Thursday 10 April 2025 00:57:02 +0000 (0:00:00.433) 0:00:00.772 ******** 2025-04-10 00:59:10.754588 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-04-10 00:59:10.754619 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-04-10 00:59:10.754634 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-04-10 00:59:10.754648 | orchestrator | 2025-04-10 00:59:10.754662 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-04-10 00:59:10.754675 | orchestrator | 2025-04-10 00:59:10.754689 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-10 00:59:10.754703 | orchestrator | Thursday 10 April 2025 00:57:02 +0000 (0:00:00.310) 0:00:01.082 ******** 2025-04-10 00:59:10.754718 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:59:10.754732 | orchestrator | 2025-04-10 
00:59:10.754965 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-04-10 00:59:10.754989 | orchestrator | Thursday 10 April 2025 00:57:03 +0000 (0:00:00.771) 0:00:01.853 ******** 2025-04-10 00:59:10.755004 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-10 00:59:10.755019 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-10 00:59:10.755033 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-10 00:59:10.755047 | orchestrator | 2025-04-10 00:59:10.755062 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-04-10 00:59:10.755076 | orchestrator | Thursday 10 April 2025 00:57:03 +0000 (0:00:00.791) 0:00:02.645 ******** 2025-04-10 00:59:10.755092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:59:10.755176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:59:10.755207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:59:10.755225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:59:10.755242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:59:10.755279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:59:10.755296 | orchestrator | 2025-04-10 00:59:10.755312 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-10 00:59:10.755327 | orchestrator | Thursday 10 April 2025 00:57:05 +0000 (0:00:01.499) 0:00:04.144 ******** 2025-04-10 00:59:10.755342 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:59:10.755356 | orchestrator | 2025-04-10 00:59:10.755371 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-04-10 00:59:10.755386 | orchestrator | Thursday 10 April 2025 00:57:06 +0000 (0:00:00.799) 0:00:04.944 ******** 2025-04-10 00:59:10.755413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:59:10.755430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:59:10.755447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:59:10.755476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:59:10.755507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:59:10.755525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:59:10.755550 | orchestrator | 2025-04-10 00:59:10.755566 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-04-10 00:59:10.755590 | orchestrator | Thursday 10 April 2025 00:57:09 +0000 (0:00:03.319) 0:00:08.264 ******** 
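The "Setting sysctl values" task above raises `vm.max_map_count` to 262144 on each node, which OpenSearch (like Elasticsearch) requires so the JVM can create enough memory-mapped areas for its index files. A rough shell equivalent of what that Ansible task does on a node (needs root; the `99-opensearch.conf` filename is an illustrative choice, not taken from this deployment):

```shell
# Apply at runtime (lost on reboot):
sysctl -w vm.max_map_count=262144

# Persist across reboots via a sysctl.d drop-in, then reload:
echo 'vm.max_map_count = 262144' > /etc/sysctl.d/99-opensearch.conf
sysctl --system
```

This is a config/ops fragment for context only; in this log the value is managed by the kolla-ansible `opensearch` role, not by hand.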
2025-04-10 00:59:10.755607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-10 00:59:10.755625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2025-04-10 00:59:10.755642 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:59:10.755669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-10 00:59:10.755687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-10 00:59:10.755720 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:59:10.755737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-10 00:59:10.755755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-10 00:59:10.755771 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:59:10.755787 | orchestrator | 2025-04-10 00:59:10.755803 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-04-10 00:59:10.755826 | orchestrator | Thursday 10 April 2025 00:57:10 +0000 (0:00:01.081) 0:00:09.345 ******** 2025-04-10 00:59:10.755873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-10 00:59:10.755891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-10 00:59:10.755924 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:59:10.755939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-10 00:59:10.755953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-10 00:59:10.755968 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:59:10.755988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-10 00:59:10.756003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-10 00:59:10.756116 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:59:10.756136 | orchestrator | 2025-04-10 00:59:10.756151 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-04-10 00:59:10.756165 | orchestrator | Thursday 10 April 2025 00:57:12 +0000 (0:00:01.597) 0:00:10.942 ******** 2025-04-10 00:59:10.756179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:59:10.756213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:59:10.756229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:59:10.756253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:59:10.756286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:59:10.756301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:59:10.756316 | orchestrator | 2025-04-10 00:59:10.756330 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-04-10 00:59:10.756345 | orchestrator | Thursday 10 April 2025 00:57:15 +0000 (0:00:03.030) 0:00:13.973 ******** 2025-04-10 00:59:10.756358 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:59:10.756372 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:59:10.756387 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:59:10.756401 | orchestrator | 2025-04-10 00:59:10.756415 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-04-10 00:59:10.756428 | orchestrator | Thursday 10 April 2025 00:57:19 +0000 (0:00:04.054) 0:00:18.028 ******** 2025-04-10 00:59:10.756442 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:59:10.756456 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:59:10.756470 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:59:10.756484 | 
orchestrator | 2025-04-10 00:59:10.756498 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-04-10 00:59:10.756512 | orchestrator | Thursday 10 April 2025 00:57:21 +0000 (0:00:02.318) 0:00:20.346 ******** 2025-04-10 00:59:10.756534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:59:10.756556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-04-10 00:59:10.756571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-10 00:59:10.756595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-04-10 00:59:10.756618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:59:10.756639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-10 00:59:10.756654 | orchestrator | 2025-04-10 00:59:10.756669 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-10 00:59:10.756683 | orchestrator | Thursday 10 April 2025 00:57:25 +0000 (0:00:03.639) 0:00:23.985 ******** 2025-04-10 00:59:10.756697 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:59:10.756711 | orchestrator | skipping: [testbed-node-1] 2025-04-10 00:59:10.756725 | orchestrator | skipping: [testbed-node-2] 2025-04-10 00:59:10.756739 | orchestrator | 2025-04-10 00:59:10.756755 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-10 00:59:10.756770 | orchestrator | Thursday 10 April 2025 00:57:25 +0000 (0:00:00.362) 0:00:24.348 ******** 2025-04-10 00:59:10.756786 | orchestrator | 2025-04-10 00:59:10.756801 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-10 00:59:10.756816 | orchestrator | Thursday 10 April 2025 00:57:25 +0000 (0:00:00.235) 0:00:24.583 ******** 2025-04-10 00:59:10.756832 | orchestrator | 2025-04-10 00:59:10.756880 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-10 00:59:10.756897 | orchestrator | Thursday 10 April 2025 00:57:25 +0000 (0:00:00.057) 0:00:24.641 ******** 2025-04-10 00:59:10.756912 | orchestrator | 2025-04-10 00:59:10.756931 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-04-10 00:59:10.756947 | orchestrator | Thursday 10 April 2025 00:57:25 +0000 (0:00:00.066) 0:00:24.707 ******** 2025-04-10 00:59:10.756963 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:59:10.756980 | orchestrator | 
2025-04-10 00:59:10.756996 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-04-10 00:59:10.757011 | orchestrator | Thursday 10 April 2025 00:57:26 +0000 (0:00:00.303) 0:00:25.011 ******** 2025-04-10 00:59:10.757027 | orchestrator | skipping: [testbed-node-0] 2025-04-10 00:59:10.757043 | orchestrator | 2025-04-10 00:59:10.757058 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-04-10 00:59:10.757074 | orchestrator | Thursday 10 April 2025 00:57:27 +0000 (0:00:00.954) 0:00:25.965 ******** 2025-04-10 00:59:10.757090 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:59:10.757106 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:59:10.757119 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:59:10.757133 | orchestrator | 2025-04-10 00:59:10.757147 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-04-10 00:59:10.757161 | orchestrator | Thursday 10 April 2025 00:57:58 +0000 (0:00:31.546) 0:00:57.511 ******** 2025-04-10 00:59:10.757175 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:59:10.757189 | orchestrator | changed: [testbed-node-2] 2025-04-10 00:59:10.757298 | orchestrator | changed: [testbed-node-1] 2025-04-10 00:59:10.757319 | orchestrator | 2025-04-10 00:59:10.757341 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-10 00:59:10.757356 | orchestrator | Thursday 10 April 2025 00:58:55 +0000 (0:00:56.325) 0:01:53.837 ******** 2025-04-10 00:59:10.757371 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 00:59:10.757386 | orchestrator | 2025-04-10 00:59:10.757401 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-04-10 00:59:10.757416 | orchestrator | Thursday 10 April 2025 00:58:55 +0000 
(0:00:00.810) 0:01:54.647 ******** 2025-04-10 00:59:10.757488 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:59:10.757502 | orchestrator | 2025-04-10 00:59:10.757516 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-04-10 00:59:10.757530 | orchestrator | Thursday 10 April 2025 00:58:58 +0000 (0:00:02.778) 0:01:57.425 ******** 2025-04-10 00:59:10.757544 | orchestrator | ok: [testbed-node-0] 2025-04-10 00:59:10.757558 | orchestrator | 2025-04-10 00:59:10.757572 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-04-10 00:59:10.757592 | orchestrator | Thursday 10 April 2025 00:59:01 +0000 (0:00:02.598) 0:02:00.024 ******** 2025-04-10 00:59:10.757606 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:59:10.757621 | orchestrator | 2025-04-10 00:59:10.757635 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-04-10 00:59:10.757649 | orchestrator | Thursday 10 April 2025 00:59:04 +0000 (0:00:03.126) 0:02:03.150 ******** 2025-04-10 00:59:10.757663 | orchestrator | changed: [testbed-node-0] 2025-04-10 00:59:10.757677 | orchestrator | 2025-04-10 00:59:10.757699 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 00:59:13.810258 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-10 00:59:13.810345 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-10 00:59:13.810354 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-10 00:59:13.810360 | orchestrator | 2025-04-10 00:59:13.810367 | orchestrator | 2025-04-10 00:59:13.810373 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 00:59:13.810381 | orchestrator | Thursday 
10 April 2025 00:59:07 +0000 (0:00:03.157) 0:02:06.307 ******** 2025-04-10 00:59:13.810387 | orchestrator | =============================================================================== 2025-04-10 00:59:13.810393 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 56.33s 2025-04-10 00:59:13.810399 | orchestrator | opensearch : Restart opensearch container ------------------------------ 31.55s 2025-04-10 00:59:13.810405 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.05s 2025-04-10 00:59:13.810411 | orchestrator | opensearch : Check opensearch containers -------------------------------- 3.64s 2025-04-10 00:59:13.810417 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.32s 2025-04-10 00:59:13.810423 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.16s 2025-04-10 00:59:13.810429 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.13s 2025-04-10 00:59:13.810434 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.03s 2025-04-10 00:59:13.810441 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.78s 2025-04-10 00:59:13.810447 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.60s 2025-04-10 00:59:13.810453 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.32s 2025-04-10 00:59:13.810459 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.60s 2025-04-10 00:59:13.810465 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.50s 2025-04-10 00:59:13.810492 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.08s 2025-04-10 00:59:13.810499 | orchestrator | opensearch : Perform 
a flush -------------------------------------------- 0.95s 2025-04-10 00:59:13.810505 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.81s 2025-04-10 00:59:13.810510 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.80s 2025-04-10 00:59:13.810516 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.79s 2025-04-10 00:59:13.810522 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.77s 2025-04-10 00:59:13.810528 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s 2025-04-10 00:59:13.810534 | orchestrator | 2025-04-10 00:59:10 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:59:13.810540 | orchestrator | 2025-04-10 00:59:10 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:59:13.810546 | orchestrator | 2025-04-10 00:59:10 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:59:13.810553 | orchestrator | 2025-04-10 00:59:10 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:59:13.810569 | orchestrator | 2025-04-10 00:59:13 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:59:13.812032 | orchestrator | 2025-04-10 00:59:13 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 00:59:13.816259 | orchestrator | 2025-04-10 00:59:13 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 00:59:16.866136 | orchestrator | 2025-04-10 00:59:13 | INFO  | Wait 1 second(s) until the next check 2025-04-10 00:59:16.866279 | orchestrator | 2025-04-10 00:59:16 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 00:59:16.866817 | orchestrator | 2025-04-10 00:59:16 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 
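The wait loop visible in the log (poll each task's state, sleep, re-check until every task leaves STARTED) can be sketched roughly as below. This is a minimal illustration, not the actual OSISM client code; `get_task_state` is a hypothetical callback standing in for whatever API the real tooling uses to query task state:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll until no task is still in state STARTED.

    get_task_state(task_id) -> str is a hypothetical stand-in for the
    real task-state lookup; the actual OSISM client API may differ.
    Returns the final observed state per task.
    """
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in list(pending):
            state = get_task_state(task_id)
            states[task_id] = state
            if state != "STARTED":
                # Task finished (e.g. SUCCESS), stop polling it.
                pending.discard(task_id)
        if pending:
            # Matches the log's "Wait 1 second(s) until the next check".
            time.sleep(interval)
    return states
```

This mirrors the observed behavior: each pass logs one state line per still-running task, then sleeps before the next round, and a task drops out of the loop once it reports a terminal state such as SUCCESS.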
[... identical task-state polling (Tasks 6ceb9c04-fa5a-4943-be37-e776008b03dc, 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 and 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce in state STARTED, rechecked every ~3 s) elided for 00:59:16 through 01:00:02 ...] 2025-04-10
01:00:02.675250 | orchestrator | 2025-04-10 01:00:02 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:00:05.730726 | orchestrator | 2025-04-10 01:00:05 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:00:05.732568 | orchestrator | 2025-04-10 01:00:05 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 01:00:05.734205 | orchestrator | 2025-04-10 01:00:05 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 01:00:05.734644 | orchestrator | 2025-04-10 01:00:05 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:00:08.775539 | orchestrator | 2025-04-10 01:00:08 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:00:08.776657 | orchestrator | 2025-04-10 01:00:08 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 01:00:08.778680 | orchestrator | 2025-04-10 01:00:08 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state STARTED 2025-04-10 01:00:08.779001 | orchestrator | 2025-04-10 01:00:08 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:00:11.841087 | orchestrator | 2025-04-10 01:00:11 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:00:11.843007 | orchestrator | 2025-04-10 01:00:11 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 01:00:11.844405 | orchestrator | 2025-04-10 01:00:11 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state STARTED 2025-04-10 01:00:11.851129 | orchestrator | 2025-04-10 01:00:11 | INFO  | Task 1d637a5d-a7f8-4e18-b3a3-a9ff6d4a72ce is in state SUCCESS 2025-04-10 01:00:11.853235 | orchestrator | 2025-04-10 01:00:11.853282 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-10 01:00:11.853298 | orchestrator | 2025-04-10 01:00:11.853313 | orchestrator | PLAY [Prepare deployment of Ceph services] 
************************************* 2025-04-10 01:00:11.853328 | orchestrator | 2025-04-10 01:00:11.853342 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-04-10 01:00:11.853357 | orchestrator | Thursday 10 April 2025 00:46:19 +0000 (0:00:02.359) 0:00:02.359 ******** 2025-04-10 01:00:11.853372 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.853387 | orchestrator | 2025-04-10 01:00:11.853402 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-10 01:00:11.853433 | orchestrator | Thursday 10 April 2025 00:46:21 +0000 (0:00:01.603) 0:00:03.963 ******** 2025-04-10 01:00:11.853449 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-10 01:00:11.853464 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-04-10 01:00:11.853478 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-04-10 01:00:11.853492 | orchestrator | 2025-04-10 01:00:11.853508 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-10 01:00:11.853531 | orchestrator | Thursday 10 April 2025 00:46:22 +0000 (0:00:00.865) 0:00:04.828 ******** 2025-04-10 01:00:11.853555 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.853579 | orchestrator | 2025-04-10 01:00:11.853601 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-04-10 01:00:11.853801 | orchestrator | Thursday 10 April 2025 00:46:23 +0000 (0:00:01.581) 0:00:06.410 ******** 2025-04-10 01:00:11.853993 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.854606 | orchestrator | ok: 
[testbed-node-4] 2025-04-10 01:00:11.854763 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.854964 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.855066 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.855094 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.855116 | orchestrator | 2025-04-10 01:00:11.855140 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-04-10 01:00:11.855164 | orchestrator | Thursday 10 April 2025 00:46:25 +0000 (0:00:01.616) 0:00:08.027 ******** 2025-04-10 01:00:11.855255 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.855281 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.855304 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.855329 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.855806 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.855824 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.855931 | orchestrator | 2025-04-10 01:00:11.855955 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-04-10 01:00:11.855971 | orchestrator | Thursday 10 April 2025 00:46:26 +0000 (0:00:00.879) 0:00:08.906 ******** 2025-04-10 01:00:11.855986 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.856001 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.856105 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.856445 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.856494 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.856536 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.856551 | orchestrator | 2025-04-10 01:00:11.856566 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-04-10 01:00:11.856581 | orchestrator | Thursday 10 April 2025 00:46:27 +0000 (0:00:01.128) 0:00:10.034 ******** 2025-04-10 01:00:11.856594 | orchestrator | ok: [testbed-node-0] 2025-04-10 
01:00:11.856608 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.856623 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.856647 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.856822 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.856838 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.856873 | orchestrator | 2025-04-10 01:00:11.856887 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-04-10 01:00:11.856902 | orchestrator | Thursday 10 April 2025 00:46:28 +0000 (0:00:01.097) 0:00:11.132 ******** 2025-04-10 01:00:11.856916 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.856930 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.856944 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.856958 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.857108 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.857472 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.857490 | orchestrator | 2025-04-10 01:00:11.857505 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-04-10 01:00:11.857519 | orchestrator | Thursday 10 April 2025 00:46:29 +0000 (0:00:00.993) 0:00:12.125 ******** 2025-04-10 01:00:11.857533 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.857547 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.857561 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.857575 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.857588 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.857602 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.857617 | orchestrator | 2025-04-10 01:00:11.857666 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-04-10 01:00:11.857682 | orchestrator | Thursday 10 April 2025 00:46:31 +0000 (0:00:01.830) 0:00:13.955 ******** 2025-04-10 01:00:11.857696 | orchestrator | 
skipping: [testbed-node-0] 2025-04-10 01:00:11.857712 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.857726 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.857789 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.857804 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.858363 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.858433 | orchestrator | 2025-04-10 01:00:11.858451 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-04-10 01:00:11.858489 | orchestrator | Thursday 10 April 2025 00:46:32 +0000 (0:00:01.244) 0:00:15.200 ******** 2025-04-10 01:00:11.858503 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.858515 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.858772 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.858786 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.858799 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.858811 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.858823 | orchestrator | 2025-04-10 01:00:11.858941 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-04-10 01:00:11.858961 | orchestrator | Thursday 10 April 2025 00:46:33 +0000 (0:00:01.373) 0:00:16.574 ******** 2025-04-10 01:00:11.858974 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-10 01:00:11.858987 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-10 01:00:11.858999 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-10 01:00:11.859012 | orchestrator | 2025-04-10 01:00:11.859024 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-04-10 01:00:11.859736 | orchestrator | Thursday 10 April 2025 00:46:35 +0000 (0:00:01.133) 0:00:17.707 ******** 2025-04-10 01:00:11.859761 | 
orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.859775 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.859801 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.859814 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.859826 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.859859 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.859873 | orchestrator | 2025-04-10 01:00:11.859886 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-04-10 01:00:11.859898 | orchestrator | Thursday 10 April 2025 00:46:36 +0000 (0:00:01.919) 0:00:19.627 ******** 2025-04-10 01:00:11.859911 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-10 01:00:11.860561 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-10 01:00:11.860578 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-10 01:00:11.860590 | orchestrator | 2025-04-10 01:00:11.860603 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-10 01:00:11.860649 | orchestrator | Thursday 10 April 2025 00:46:39 +0000 (0:00:02.993) 0:00:22.621 ******** 2025-04-10 01:00:11.860705 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-10 01:00:11.860719 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-10 01:00:11.860731 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-10 01:00:11.860744 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.860757 | orchestrator | 2025-04-10 01:00:11.861223 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-10 01:00:11.861259 | orchestrator | Thursday 10 April 2025 00:46:40 +0000 (0:00:00.994) 0:00:23.615 ******** 2025-04-10 01:00:11.861273 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-10 01:00:11.861290 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-10 01:00:11.861303 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-10 01:00:11.861316 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.861328 | orchestrator | 2025-04-10 01:00:11.861341 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-10 01:00:11.861353 | orchestrator | Thursday 10 April 2025 00:46:42 +0000 (0:00:02.035) 0:00:25.650 ******** 2025-04-10 01:00:11.861367 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-10 01:00:11.861381 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-10 01:00:11.861394 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-10 01:00:11.861418 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.861431 | orchestrator | 2025-04-10 01:00:11.861444 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-10 01:00:11.861541 | orchestrator | Thursday 10 April 2025 00:46:43 +0000 (0:00:00.479) 0:00:26.130 ******** 2025-04-10 01:00:11.861563 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-10 00:46:37.703967', 'end': '2025-04-10 00:46:37.985032', 'delta': '0:00:00.281065', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-10 01:00:11.861581 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-10 00:46:38.599986', 'end': '2025-04-10 00:46:38.873693', 'delta': '0:00:00.273707', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q 
--filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-10 01:00:11.861596 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-10 00:46:39.429930', 'end': '2025-04-10 00:46:39.716499', 'delta': '0:00:00.286569', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-10 01:00:11.861610 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.861623 | orchestrator | 2025-04-10 01:00:11.861637 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-04-10 01:00:11.861650 | orchestrator | Thursday 10 April 2025 00:46:43 +0000 (0:00:00.464) 0:00:26.594 ******** 2025-04-10 01:00:11.861663 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.861677 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.861690 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.861702 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.861715 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.861728 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.861741 | orchestrator | 2025-04-10 01:00:11.861754 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] 
************* 2025-04-10 01:00:11.861767 | orchestrator | Thursday 10 April 2025 00:46:46 +0000 (0:00:02.310) 0:00:28.904 ******** 2025-04-10 01:00:11.861780 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.861793 | orchestrator | 2025-04-10 01:00:11.861806 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-04-10 01:00:11.861819 | orchestrator | Thursday 10 April 2025 00:46:46 +0000 (0:00:00.745) 0:00:29.650 ******** 2025-04-10 01:00:11.861832 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.861912 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.861926 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.861943 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.861971 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.861988 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.862004 | orchestrator | 2025-04-10 01:00:11.862053 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-10 01:00:11.862067 | orchestrator | Thursday 10 April 2025 00:46:48 +0000 (0:00:01.265) 0:00:30.915 ******** 2025-04-10 01:00:11.862077 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.862094 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.862104 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.862114 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.862124 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.862134 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.862144 | orchestrator | 2025-04-10 01:00:11.862155 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-10 01:00:11.862167 | orchestrator | Thursday 10 April 2025 00:46:49 +0000 (0:00:01.506) 0:00:32.421 ******** 2025-04-10 01:00:11.862179 | orchestrator | skipping: [testbed-node-0] 2025-04-10 
01:00:11.862190 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.862202 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.862213 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.862225 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.862236 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.862247 | orchestrator | 2025-04-10 01:00:11.862259 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-10 01:00:11.862270 | orchestrator | Thursday 10 April 2025 00:46:51 +0000 (0:00:01.321) 0:00:33.743 ******** 2025-04-10 01:00:11.862360 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.862375 | orchestrator | 2025-04-10 01:00:11.862388 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-10 01:00:11.862399 | orchestrator | Thursday 10 April 2025 00:46:51 +0000 (0:00:00.251) 0:00:33.994 ******** 2025-04-10 01:00:11.862411 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.862422 | orchestrator | 2025-04-10 01:00:11.862433 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-10 01:00:11.862445 | orchestrator | Thursday 10 April 2025 00:46:51 +0000 (0:00:00.247) 0:00:34.241 ******** 2025-04-10 01:00:11.862456 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.862468 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.862479 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.862490 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.862502 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.862513 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.862523 | orchestrator | 2025-04-10 01:00:11.862533 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-10 01:00:11.862543 | orchestrator | Thursday 10 
April 2025 00:46:52 +0000 (0:00:00.661) 0:00:34.903 ********
2025-04-10 01:00:11.862553 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.862564 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.862574 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.862584 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.862594 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.862603 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.862614 | orchestrator |
2025-04-10 01:00:11.862624 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] **************
2025-04-10 01:00:11.862634 | orchestrator | Thursday 10 April 2025 00:46:53 +0000 (0:00:01.249) 0:00:36.153 ********
2025-04-10 01:00:11.862644 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.862654 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.862664 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.862674 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.862684 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.862694 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.862704 | orchestrator |
2025-04-10 01:00:11.862724 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] ***************************
2025-04-10 01:00:11.862734 | orchestrator | Thursday 10 April 2025 00:46:54 +0000 (0:00:01.068) 0:00:37.221 ********
2025-04-10 01:00:11.862744 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.862754 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.862765 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.862775 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.862785 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.862795 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.862806 | orchestrator |
2025-04-10 01:00:11.862816 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] ****
2025-04-10 01:00:11.862826 | orchestrator | Thursday 10 April 2025 00:46:55 +0000 (0:00:01.315) 0:00:38.537 ********
2025-04-10 01:00:11.862836 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.862885 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.862896 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.862906 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.862916 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.862926 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.862936 | orchestrator |
2025-04-10 01:00:11.862947 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] ***********************
2025-04-10 01:00:11.862957 | orchestrator | Thursday 10 April 2025 00:46:56 +0000 (0:00:00.842) 0:00:39.379 ********
2025-04-10 01:00:11.862967 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.862977 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.862987 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.862997 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.863007 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.863017 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.863027 | orchestrator |
2025-04-10 01:00:11.863043 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-04-10 01:00:11.863053 | orchestrator | Thursday 10 April 2025 00:46:58 +0000 (0:00:01.392) 0:00:40.771 ********
2025-04-10 01:00:11.863064 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.863074 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.863084 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.863101 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.863112 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.863122 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.863132 | orchestrator |
2025-04-10 01:00:11.863143 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] ***
2025-04-10 01:00:11.863153 | orchestrator | Thursday 10 April 2025 00:46:59 +0000 (0:00:01.242) 0:00:42.014 ********
2025-04-10 01:00:11.863164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5', 'scsi-SQEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part1', 'scsi-SQEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part14', 'scsi-SQEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part15', 'scsi-SQEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part16', 'scsi-SQEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.863484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d97216ad-03db-4dc0-9fce-19fb462ce1e2', 'scsi-SQEMU_QEMU_HARDDISK_d97216ad-03db-4dc0-9fce-19fb462ce1e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.863571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b5996d2-64c6-4dbd-ad82-ae9f8c5fd05f', 'scsi-SQEMU_QEMU_HARDDISK_3b5996d2-64c6-4dbd-ad82-ae9f8c5fd05f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.863583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c91147a-8481-48ce-bf49-6c79ed393785', 'scsi-SQEMU_QEMU_HARDDISK_6c91147a-8481-48ce-bf49-6c79ed393785'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.863595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1651f83f-0ee2-4e26-b483-3c086e6d5fb5', 'scsi-SQEMU_QEMU_HARDDISK_1651f83f-0ee2-4e26-b483-3c086e6d5fb5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1651f83f-0ee2-4e26-b483-3c086e6d5fb5-part1', 'scsi-SQEMU_QEMU_HARDDISK_1651f83f-0ee2-4e26-b483-3c086e6d5fb5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1651f83f-0ee2-4e26-b483-3c086e6d5fb5-part14', 'scsi-SQEMU_QEMU_HARDDISK_1651f83f-0ee2-4e26-b483-3c086e6d5fb5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1651f83f-0ee2-4e26-b483-3c086e6d5fb5-part15', 'scsi-SQEMU_QEMU_HARDDISK_1651f83f-0ee2-4e26-b483-3c086e6d5fb5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1651f83f-0ee2-4e26-b483-3c086e6d5fb5-part16', 'scsi-SQEMU_QEMU_HARDDISK_1651f83f-0ee2-4e26-b483-3c086e6d5fb5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.863657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-10-00-02-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.863681 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.863693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb4c67-862b-4727-95c3-98e3283b8fb6', 'scsi-SQEMU_QEMU_HARDDISK_2fcb4c67-862b-4727-95c3-98e3283b8fb6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.863704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_737be83d-5ee5-4854-9988-400b2ee7e7c1', 'scsi-SQEMU_QEMU_HARDDISK_737be83d-5ee5-4854-9988-400b2ee7e7c1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.863715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53cc8dbb-b824-45fa-a2cb-804fcc96761d', 'scsi-SQEMU_QEMU_HARDDISK_53cc8dbb-b824-45fa-a2cb-804fcc96761d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.863727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-10-00-02-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.863738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863896 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.863907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.863918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ab85bb6-830c-4bb2-a981-885b30070cf3', 'scsi-SQEMU_QEMU_HARDDISK_4ab85bb6-830c-4bb2-a981-885b30070cf3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ab85bb6-830c-4bb2-a981-885b30070cf3-part1', 'scsi-SQEMU_QEMU_HARDDISK_4ab85bb6-830c-4bb2-a981-885b30070cf3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ab85bb6-830c-4bb2-a981-885b30070cf3-part14', 'scsi-SQEMU_QEMU_HARDDISK_4ab85bb6-830c-4bb2-a981-885b30070cf3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ab85bb6-830c-4bb2-a981-885b30070cf3-part15', 'scsi-SQEMU_QEMU_HARDDISK_4ab85bb6-830c-4bb2-a981-885b30070cf3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ab85bb6-830c-4bb2-a981-885b30070cf3-part16', 'scsi-SQEMU_QEMU_HARDDISK_4ab85bb6-830c-4bb2-a981-885b30070cf3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.864008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c760fe92-14ba-404a-b6f7-3b1432fac79b', 'scsi-SQEMU_QEMU_HARDDISK_c760fe92-14ba-404a-b6f7-3b1432fac79b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.864025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bfdc91a-8c25-4ce3-95e3-852e7229c9f1', 'scsi-SQEMU_QEMU_HARDDISK_5bfdc91a-8c25-4ce3-95e3-852e7229c9f1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.864037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7af0ad6a--7281--507c--97d1--7760f3587d37-osd--block--7af0ad6a--7281--507c--97d1--7760f3587d37', 'dm-uuid-LVM-NjIIb6LQMrocij3EZUof8kffa8YMcdMc5e9g1Wb8LVmdWUUgPPS1gxSz6S3506Bt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.864049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89a2b2fc-2fff-49a5-ab5e-089af5d983aa', 'scsi-SQEMU_QEMU_HARDDISK_89a2b2fc-2fff-49a5-ab5e-089af5d983aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.864061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52286b97--e205--54c6--a29d--cc3afdc4b583-osd--block--52286b97--e205--54c6--a29d--cc3afdc4b583', 'dm-uuid-LVM-HPSxuZ8DHDJ8ZqK8wEV3v0eAT4VCXs4Bx9O8QX2FWr2BjoMNYw0yToUCJN6qRdTD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.864072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.864101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-10-00-02-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.864164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.864180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.864191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.864202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.864218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.864229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.864240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-10 01:00:11.864305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b', 'scsi-SQEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part1', 'scsi-SQEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part14', 'scsi-SQEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part15', 'scsi-SQEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part16', 'scsi-SQEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.864328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7af0ad6a--7281--507c--97d1--7760f3587d37-osd--block--7af0ad6a--7281--507c--97d1--7760f3587d37'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-p3XfEl-mz29-Jpky-r1OI-scKs-HVYX-H6St0W', 'scsi-0QEMU_QEMU_HARDDISK_e188828f-11b5-49b7-aa2c-198471f41cb7', 'scsi-SQEMU_QEMU_HARDDISK_e188828f-11b5-49b7-aa2c-198471f41cb7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.864342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--52286b97--e205--54c6--a29d--cc3afdc4b583-osd--block--52286b97--e205--54c6--a29d--cc3afdc4b583'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xbmZTz-LB8C-hBHE-q55R-kKbE-kPyk-uviAl0', 'scsi-0QEMU_QEMU_HARDDISK_57ed073f-7848-4dd1-911d-b06790e5cae3', 'scsi-SQEMU_QEMU_HARDDISK_57ed073f-7848-4dd1-911d-b06790e5cae3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:00:11.864354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f117f5c-a676-4195-9d53-4eb16ef4d9e2', 'scsi-SQEMU_QEMU_HARDDISK_4f117f5c-a676-4195-9d53-4eb16ef4d9e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:00:11.864371 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.864382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-10-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:00:11.864456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e6570ad4--669c--53e9--93b8--24292f6b58fb-osd--block--e6570ad4--669c--53e9--93b8--24292f6b58fb', 'dm-uuid-LVM-g5uaQZJhqiIdcOYI8y1QX1dMjLzoaopSwfJSl0BpmhpQ35uuYncttW99JjewdXwF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--543b72d2--41b4--5023--b438--6662cb79109c-osd--block--543b72d2--41b4--5023--b438--6662cb79109c', 'dm-uuid-LVM-uaEs4yIc9u0sB5SzmJQhTTBLeir2in1damUAgW90tEPCYeVhVMEZ3tCsUHO3rvIT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864567 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.864628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a', 'scsi-SQEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:00:11.864656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e6570ad4--669c--53e9--93b8--24292f6b58fb-osd--block--e6570ad4--669c--53e9--93b8--24292f6b58fb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Y0A54c-DQxh-euLG-jj02-m3iO-QWD4-7QmJ9P', 'scsi-0QEMU_QEMU_HARDDISK_864e33c6-b4c3-48eb-91b8-2629744c3ba6', 'scsi-SQEMU_QEMU_HARDDISK_864e33c6-b4c3-48eb-91b8-2629744c3ba6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:00:11.864673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--543b72d2--41b4--5023--b438--6662cb79109c-osd--block--543b72d2--41b4--5023--b438--6662cb79109c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tCYsbb-8M5x-ZLLr-FLxc-Liar-cDie-nSwqm0', 'scsi-0QEMU_QEMU_HARDDISK_b0ed1186-9beb-4d4b-adab-3343747bf238', 'scsi-SQEMU_QEMU_HARDDISK_b0ed1186-9beb-4d4b-adab-3343747bf238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:00:11.864748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa805255-2b65-45ba-aa52-d97cf6f3e06a', 'scsi-SQEMU_QEMU_HARDDISK_fa805255-2b65-45ba-aa52-d97cf6f3e06a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:00:11.864766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-10-00-02-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:00:11.864778 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.864789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--47ce51ce--522f--5092--939d--97f529b04c78-osd--block--47ce51ce--522f--5092--939d--97f529b04c78', 'dm-uuid-LVM-4CDMFffTI1LTGWl8RXFR68tCtFdmcX9htiCP5EJoAD2X7cCUVwtP3sFnI33pMg1p'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1024c186--728b--5ddc--b380--e3967fe3a792-osd--block--1024c186--728b--5ddc--b380--e3967fe3a792', 'dm-uuid-LVM-Y00XykzfuS4D5SX65I650rApYtaExU63DS3FPv6iOvr226g6HlMCZezHKcJljJG2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.864999 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.865015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:00:11.865027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817', 'scsi-SQEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:00:11.865118 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--47ce51ce--522f--5092--939d--97f529b04c78-osd--block--47ce51ce--522f--5092--939d--97f529b04c78'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GeYnBq-RSN3-X1LM-vEoe-Z3mQ-fDse-1VUCb1', 'scsi-0QEMU_QEMU_HARDDISK_7b59c1d3-d88b-4e69-8f5d-bfd6640ee0c1', 'scsi-SQEMU_QEMU_HARDDISK_7b59c1d3-d88b-4e69-8f5d-bfd6640ee0c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:00:11.865135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1024c186--728b--5ddc--b380--e3967fe3a792-osd--block--1024c186--728b--5ddc--b380--e3967fe3a792'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZzVfzF-Bh0M-b68W-hhnT-aaIn-Ppt1-K83hYV', 'scsi-0QEMU_QEMU_HARDDISK_8309ccf2-021f-4ba0-8871-1baa1ae2c644', 'scsi-SQEMU_QEMU_HARDDISK_8309ccf2-021f-4ba0-8871-1baa1ae2c644'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:00:11.865146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_221f8640-be1f-4702-ab57-197a8a373172', 'scsi-SQEMU_QEMU_HARDDISK_221f8640-be1f-4702-ab57-197a8a373172'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:00:11.865157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-10-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:00:11.865174 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.865184 | orchestrator | 2025-04-10 01:00:11.865195 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-04-10 01:00:11.865205 | orchestrator | Thursday 10 April 2025 00:47:02 +0000 (0:00:02.697) 0:00:44.711 ******** 2025-04-10 01:00:11.865215 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.865224 | orchestrator | 2025-04-10 01:00:11.865232 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-04-10 01:00:11.865241 | orchestrator | Thursday 10 April 2025 00:47:02 +0000 (0:00:00.343) 0:00:45.055 ******** 2025-04-10 01:00:11.865249 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.865258 | orchestrator | 2025-04-10 01:00:11.865266 | orchestrator | TASK [ceph-facts : set_fact 
rgw_hostname] ************************************** 2025-04-10 01:00:11.865275 | orchestrator | Thursday 10 April 2025 00:47:02 +0000 (0:00:00.184) 0:00:45.239 ******** 2025-04-10 01:00:11.865283 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.865292 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.865301 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.865309 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.865318 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.865326 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.865335 | orchestrator | 2025-04-10 01:00:11.865343 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-04-10 01:00:11.865352 | orchestrator | Thursday 10 April 2025 00:47:03 +0000 (0:00:01.086) 0:00:46.326 ******** 2025-04-10 01:00:11.865360 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.865369 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.865377 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.865386 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.865394 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.865403 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.865411 | orchestrator | 2025-04-10 01:00:11.865420 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-04-10 01:00:11.865428 | orchestrator | Thursday 10 April 2025 00:47:05 +0000 (0:00:01.995) 0:00:48.321 ******** 2025-04-10 01:00:11.865437 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.865445 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.865454 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.865462 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.865471 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.865479 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.865488 | orchestrator | 
2025-04-10 01:00:11.865497 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-04-10 01:00:11.865516 | orchestrator | Thursday 10 April 2025 00:47:06 +0000 (0:00:00.926) 0:00:49.248 ********
2025-04-10 01:00:11.865524 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.865533 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.865541 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.865550 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.865558 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.865610 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.865623 | orchestrator |
2025-04-10 01:00:11.865632 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-04-10 01:00:11.865641 | orchestrator | Thursday 10 April 2025 00:47:08 +0000 (0:00:01.996) 0:00:51.244 ********
2025-04-10 01:00:11.865650 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.865660 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.865669 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.865683 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.865693 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.865702 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.865711 | orchestrator |
2025-04-10 01:00:11.865721 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-04-10 01:00:11.865730 | orchestrator | Thursday 10 April 2025 00:47:09 +0000 (0:00:00.981) 0:00:52.226 ********
2025-04-10 01:00:11.865739 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.865748 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.865757 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.865766 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.865775 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.865784 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.865793 | orchestrator |
2025-04-10 01:00:11.865803 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-04-10 01:00:11.865812 | orchestrator | Thursday 10 April 2025 00:47:10 +0000 (0:00:01.255) 0:00:53.481 ********
2025-04-10 01:00:11.865821 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.865830 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.865853 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.865866 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.865875 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.865883 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.865891 | orchestrator |
2025-04-10 01:00:11.865900 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-04-10 01:00:11.865909 | orchestrator | Thursday 10 April 2025 00:47:11 +0000 (0:00:00.814) 0:00:54.296 ********
2025-04-10 01:00:11.865917 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-10 01:00:11.865926 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-10 01:00:11.865934 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-10 01:00:11.865943 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-04-10 01:00:11.865952 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.865960 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-04-10 01:00:11.865969 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-04-10 01:00:11.865977 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-04-10 01:00:11.865986 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.865995 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-04-10 01:00:11.866007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-04-10 01:00:11.866040 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-04-10 01:00:11.866051 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-04-10 01:00:11.866060 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-04-10 01:00:11.866068 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.866077 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-04-10 01:00:11.866085 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.866094 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-04-10 01:00:11.866102 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-04-10 01:00:11.866111 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-04-10 01:00:11.866119 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.866128 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-04-10 01:00:11.866136 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-04-10 01:00:11.866145 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.866153 | orchestrator |
2025-04-10 01:00:11.866162 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-04-10 01:00:11.866170 | orchestrator | Thursday 10 April 2025 00:47:15 +0000 (0:00:03.581) 0:00:57.878 ********
2025-04-10 01:00:11.866184 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-10 01:00:11.866193 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-10 01:00:11.866202 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-04-10 01:00:11.866210 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-10 01:00:11.866219 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-04-10 01:00:11.866227 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-04-10 01:00:11.866236 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.866245 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-04-10 01:00:11.866253 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-04-10 01:00:11.866262 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.866270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-04-10 01:00:11.866279 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-04-10 01:00:11.866287 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-04-10 01:00:11.866296 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.866304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-04-10 01:00:11.866313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-04-10 01:00:11.866322 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-04-10 01:00:11.866330 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-04-10 01:00:11.866350 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.866359 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-04-10 01:00:11.866418 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-04-10 01:00:11.866431 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-04-10 01:00:11.866441 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.866450 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.866459 | orchestrator |
2025-04-10 01:00:11.866469 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-04-10 01:00:11.866478 | orchestrator | Thursday 10 April 2025 00:47:20 +0000 (0:00:05.028) 0:01:02.906 ********
2025-04-10 01:00:11.866487 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-10 01:00:11.866496 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-04-10 01:00:11.866505 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-04-10 01:00:11.866515 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-04-10 01:00:11.866524 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-04-10 01:00:11.866533 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-04-10 01:00:11.866543 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-04-10 01:00:11.866552 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-04-10 01:00:11.866561 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-04-10 01:00:11.866570 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-04-10 01:00:11.866579 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-04-10 01:00:11.866588 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-04-10 01:00:11.866598 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-04-10 01:00:11.866607 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-04-10 01:00:11.866616 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-04-10 01:00:11.866625 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-04-10 01:00:11.866634 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-04-10 01:00:11.866644 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-04-10 01:00:11.866653 | orchestrator |
2025-04-10 01:00:11.866662 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-04-10 01:00:11.866671 | orchestrator | Thursday 10 April 2025 00:47:28 +0000 (0:00:08.359) 0:01:11.266 ********
2025-04-10 01:00:11.866688 | orchestrator | skipping:
[testbed-node-0] => (item=testbed-node-0)  2025-04-10 01:00:11.866697 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-10 01:00:11.866707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-10 01:00:11.866716 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-10 01:00:11.866725 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.866734 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-10 01:00:11.866743 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-10 01:00:11.866752 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-10 01:00:11.866761 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-10 01:00:11.866771 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-10 01:00:11.866780 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.866793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-10 01:00:11.866803 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-10 01:00:11.866812 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.866821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-10 01:00:11.866830 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-10 01:00:11.866853 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-10 01:00:11.866862 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.866871 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-10 01:00:11.866879 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.866888 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-10 01:00:11.866896 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-10 01:00:11.866905 | orchestrator | 
skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-10 01:00:11.866913 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.866922 | orchestrator | 2025-04-10 01:00:11.866931 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-04-10 01:00:11.866939 | orchestrator | Thursday 10 April 2025 00:47:30 +0000 (0:00:01.473) 0:01:12.739 ******** 2025-04-10 01:00:11.866948 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-10 01:00:11.866960 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-10 01:00:11.866969 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-10 01:00:11.866977 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-10 01:00:11.866986 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-10 01:00:11.866994 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.867003 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-10 01:00:11.867011 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.867020 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-10 01:00:11.867029 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-10 01:00:11.867037 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-10 01:00:11.867046 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-10 01:00:11.867054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-10 01:00:11.867063 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.867071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-10 01:00:11.867080 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-10 01:00:11.867135 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-10 
01:00:11.867148 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.867157 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-10 01:00:11.867166 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.867176 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-10 01:00:11.867191 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-10 01:00:11.867201 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-10 01:00:11.867210 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.867223 | orchestrator | 2025-04-10 01:00:11.867233 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-04-10 01:00:11.867242 | orchestrator | Thursday 10 April 2025 00:47:31 +0000 (0:00:01.478) 0:01:14.218 ******** 2025-04-10 01:00:11.867252 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-04-10 01:00:11.867261 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-10 01:00:11.867271 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-10 01:00:11.867280 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-10 01:00:11.867289 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-04-10 01:00:11.867299 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-10 01:00:11.867308 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-10 01:00:11.867317 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-10 01:00:11.867327 | orchestrator | ok: [testbed-node-2] 
=> (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-04-10 01:00:11.867336 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-10 01:00:11.867345 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-10 01:00:11.867355 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-10 01:00:11.867364 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-10 01:00:11.867373 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-10 01:00:11.867383 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-10 01:00:11.867392 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.867401 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.867411 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-10 01:00:11.867420 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-10 01:00:11.867429 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-10 01:00:11.867438 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.867448 | orchestrator | 2025-04-10 01:00:11.867457 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-04-10 01:00:11.867466 | orchestrator | Thursday 10 April 2025 00:47:33 +0000 (0:00:01.879) 0:01:16.097 ******** 2025-04-10 01:00:11.867476 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.867485 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.867494 | orchestrator | skipping: [testbed-node-2] 2025-04-10 
01:00:11.867504 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.867513 | orchestrator | 2025-04-10 01:00:11.867522 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-10 01:00:11.867532 | orchestrator | Thursday 10 April 2025 00:47:35 +0000 (0:00:01.828) 0:01:17.925 ******** 2025-04-10 01:00:11.867541 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.867550 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.867564 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.867573 | orchestrator | 2025-04-10 01:00:11.867582 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-10 01:00:11.867592 | orchestrator | Thursday 10 April 2025 00:47:36 +0000 (0:00:00.839) 0:01:18.764 ******** 2025-04-10 01:00:11.867601 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.867610 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.867619 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.867628 | orchestrator | 2025-04-10 01:00:11.867637 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-10 01:00:11.867647 | orchestrator | Thursday 10 April 2025 00:47:37 +0000 (0:00:01.248) 0:01:20.013 ******** 2025-04-10 01:00:11.867656 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.867665 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.867674 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.867683 | orchestrator | 2025-04-10 01:00:11.867692 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-10 01:00:11.867702 | orchestrator | Thursday 10 April 2025 00:47:38 +0000 (0:00:00.787) 0:01:20.801 ******** 2025-04-10 
01:00:11.867711 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.867720 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.867729 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.867738 | orchestrator | 2025-04-10 01:00:11.867748 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-10 01:00:11.867817 | orchestrator | Thursday 10 April 2025 00:47:39 +0000 (0:00:01.045) 0:01:21.846 ******** 2025-04-10 01:00:11.867831 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:00:11.867854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:00:11.867864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:00:11.867875 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.867885 | orchestrator | 2025-04-10 01:00:11.867894 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-10 01:00:11.867904 | orchestrator | Thursday 10 April 2025 00:47:39 +0000 (0:00:00.831) 0:01:22.678 ******** 2025-04-10 01:00:11.867914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:00:11.867924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:00:11.867933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:00:11.867943 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.867953 | orchestrator | 2025-04-10 01:00:11.867963 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-10 01:00:11.867973 | orchestrator | Thursday 10 April 2025 00:47:40 +0000 (0:00:00.936) 0:01:23.615 ******** 2025-04-10 01:00:11.867982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:00:11.867992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:00:11.868002 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:00:11.868011 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.868025 | orchestrator | 2025-04-10 01:00:11.868035 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-10 01:00:11.868045 | orchestrator | Thursday 10 April 2025 00:47:41 +0000 (0:00:00.696) 0:01:24.311 ******** 2025-04-10 01:00:11.868055 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.868065 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.868076 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.868089 | orchestrator | 2025-04-10 01:00:11.868100 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-10 01:00:11.868110 | orchestrator | Thursday 10 April 2025 00:47:42 +0000 (0:00:00.680) 0:01:24.992 ******** 2025-04-10 01:00:11.868120 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-10 01:00:11.868129 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-10 01:00:11.868138 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-10 01:00:11.868152 | orchestrator | 2025-04-10 01:00:11.868161 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-10 01:00:11.868169 | orchestrator | Thursday 10 April 2025 00:47:44 +0000 (0:00:01.940) 0:01:26.932 ******** 2025-04-10 01:00:11.868178 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.868186 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.868195 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.868203 | orchestrator | 2025-04-10 01:00:11.868212 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-10 01:00:11.868221 | orchestrator | Thursday 10 April 2025 00:47:45 +0000 (0:00:00.793) 0:01:27.726 ******** 2025-04-10 01:00:11.868229 | orchestrator | skipping: [testbed-node-3] 
2025-04-10 01:00:11.868238 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.868247 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.868255 | orchestrator | 2025-04-10 01:00:11.868264 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-10 01:00:11.868272 | orchestrator | Thursday 10 April 2025 00:47:46 +0000 (0:00:01.190) 0:01:28.916 ******** 2025-04-10 01:00:11.868281 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-10 01:00:11.868290 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.868299 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-10 01:00:11.868307 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.868316 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-10 01:00:11.868325 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.868333 | orchestrator | 2025-04-10 01:00:11.868342 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-10 01:00:11.868351 | orchestrator | Thursday 10 April 2025 00:47:47 +0000 (0:00:00.857) 0:01:29.774 ******** 2025-04-10 01:00:11.868359 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-10 01:00:11.868368 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.868377 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-10 01:00:11.868386 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.868394 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-10 01:00:11.868403 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.868412 | orchestrator | 2025-04-10 01:00:11.868426 | 
orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-10 01:00:11.868435 | orchestrator | Thursday 10 April 2025 00:47:47 +0000 (0:00:00.872) 0:01:30.646 ******** 2025-04-10 01:00:11.868444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:00:11.868453 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-10 01:00:11.868461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:00:11.868470 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-10 01:00:11.868478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:00:11.868487 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.868496 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-10 01:00:11.868504 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.868513 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-10 01:00:11.868521 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-10 01:00:11.868576 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-10 01:00:11.868588 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.868597 | orchestrator | 2025-04-10 01:00:11.868606 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-04-10 01:00:11.868615 | orchestrator | Thursday 10 April 2025 00:47:49 +0000 (0:00:01.247) 0:01:31.894 ******** 2025-04-10 01:00:11.868630 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.868638 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.868647 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.868655 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.868664 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.868673 | orchestrator | 
skipping: [testbed-node-5] 2025-04-10 01:00:11.868681 | orchestrator | 2025-04-10 01:00:11.868690 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-10 01:00:11.868698 | orchestrator | Thursday 10 April 2025 00:47:50 +0000 (0:00:01.202) 0:01:33.097 ******** 2025-04-10 01:00:11.868707 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-10 01:00:11.868716 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-10 01:00:11.868725 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-10 01:00:11.868733 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-10 01:00:11.868742 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-10 01:00:11.868750 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-10 01:00:11.868759 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-10 01:00:11.868767 | orchestrator | 2025-04-10 01:00:11.868776 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-04-10 01:00:11.868785 | orchestrator | Thursday 10 April 2025 00:47:51 +0000 (0:00:00.987) 0:01:34.085 ******** 2025-04-10 01:00:11.868793 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-10 01:00:11.868802 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-10 01:00:11.868810 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-10 01:00:11.868819 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-10 01:00:11.868828 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-04-10 01:00:11.868837 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-10 01:00:11.868858 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-10 01:00:11.868867 | orchestrator | 2025-04-10 01:00:11.868875 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-10 01:00:11.868884 | orchestrator | Thursday 10 April 2025 00:47:54 +0000 (0:00:02.733) 0:01:36.818 ******** 2025-04-10 01:00:11.868893 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.868902 | orchestrator | 2025-04-10 01:00:11.868911 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-10 01:00:11.868920 | orchestrator | Thursday 10 April 2025 00:47:56 +0000 (0:00:01.976) 0:01:38.794 ******** 2025-04-10 01:00:11.868928 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.868937 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.868946 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.868954 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.868963 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.868971 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.868980 | orchestrator | 2025-04-10 01:00:11.868989 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-10 01:00:11.868998 | orchestrator | Thursday 10 April 2025 00:47:57 +0000 (0:00:01.093) 0:01:39.887 ******** 2025-04-10 01:00:11.869006 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.869015 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.869023 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.869047 | 
orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.869056 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.869065 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.869073 | orchestrator | 2025-04-10 01:00:11.869082 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-10 01:00:11.869091 | orchestrator | Thursday 10 April 2025 00:47:58 +0000 (0:00:01.698) 0:01:41.586 ******** 2025-04-10 01:00:11.869099 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.869108 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.869117 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.869125 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.869134 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.869142 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.869151 | orchestrator | 2025-04-10 01:00:11.869160 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-10 01:00:11.869169 | orchestrator | Thursday 10 April 2025 00:48:00 +0000 (0:00:01.281) 0:01:42.868 ******** 2025-04-10 01:00:11.869178 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.869186 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.869195 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.869204 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.869212 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.869221 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.869229 | orchestrator | 2025-04-10 01:00:11.869238 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-10 01:00:11.869247 | orchestrator | Thursday 10 April 2025 00:48:01 +0000 (0:00:01.347) 0:01:44.215 ******** 2025-04-10 01:00:11.869255 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.869264 | orchestrator | skipping: [testbed-node-3] 2025-04-10 
01:00:11.869325 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.869342 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.869352 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.869361 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.869371 | orchestrator | 2025-04-10 01:00:11.869380 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-10 01:00:11.869389 | orchestrator | Thursday 10 April 2025 00:48:02 +0000 (0:00:00.931) 0:01:45.147 ******** 2025-04-10 01:00:11.869399 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.869408 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.869417 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.869426 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.869435 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.869444 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.869453 | orchestrator | 2025-04-10 01:00:11.869462 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-10 01:00:11.869471 | orchestrator | Thursday 10 April 2025 00:48:03 +0000 (0:00:00.930) 0:01:46.078 ******** 2025-04-10 01:00:11.869481 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.869490 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.869499 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.869508 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.869517 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.869526 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.869535 | orchestrator | 2025-04-10 01:00:11.869544 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-10 01:00:11.869553 | orchestrator | Thursday 10 April 2025 00:48:04 +0000 (0:00:00.677) 0:01:46.755 ******** 2025-04-10 01:00:11.869563 | 
orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.869572 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.869581 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.869590 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.869599 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.869608 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.869617 | orchestrator | 2025-04-10 01:00:11.869626 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-10 01:00:11.869640 | orchestrator | Thursday 10 April 2025 00:48:05 +0000 (0:00:01.158) 0:01:47.914 ******** 2025-04-10 01:00:11.869650 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.869659 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.869668 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.869677 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.869686 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.869695 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.869704 | orchestrator | 2025-04-10 01:00:11.869713 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-10 01:00:11.869722 | orchestrator | Thursday 10 April 2025 00:48:06 +0000 (0:00:00.893) 0:01:48.807 ******** 2025-04-10 01:00:11.869731 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.869741 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.869750 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.869759 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.869768 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.869777 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.869786 | orchestrator | 2025-04-10 01:00:11.869795 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 
2025-04-10 01:00:11.869805 | orchestrator | Thursday 10 April 2025 00:48:07 +0000 (0:00:01.170) 0:01:49.977 ********
2025-04-10 01:00:11.869814 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.869823 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.869832 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.869883 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.869893 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.869902 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.869910 | orchestrator |
2025-04-10 01:00:11.869919 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-04-10 01:00:11.869928 | orchestrator | Thursday 10 April 2025 00:48:08 +0000 (0:00:01.197) 0:01:51.175 ********
2025-04-10 01:00:11.869936 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.869945 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.869954 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.869962 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.869971 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.869979 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.869988 | orchestrator |
2025-04-10 01:00:11.869996 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-04-10 01:00:11.870005 | orchestrator | Thursday 10 April 2025 00:48:09 +0000 (0:00:00.863) 0:01:52.038 ********
2025-04-10 01:00:11.870013 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.870047 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.870056 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.870065 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.870073 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.870082 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.870090 | orchestrator |
2025-04-10 01:00:11.870099 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-04-10 01:00:11.870108 | orchestrator | Thursday 10 April 2025 00:48:09 +0000 (0:00:00.626) 0:01:52.665 ********
2025-04-10 01:00:11.870116 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.870131 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.870140 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.870149 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.870158 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.870167 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.870175 | orchestrator |
2025-04-10 01:00:11.870184 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-04-10 01:00:11.870192 | orchestrator | Thursday 10 April 2025 00:48:10 +0000 (0:00:00.976) 0:01:53.642 ********
2025-04-10 01:00:11.870201 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.870215 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.870223 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.870232 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.870240 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.870249 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.870258 | orchestrator |
2025-04-10 01:00:11.870266 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-04-10 01:00:11.870329 | orchestrator | Thursday 10 April 2025 00:48:11 +0000 (0:00:01.055) 0:01:54.329 ********
2025-04-10 01:00:11.870342 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.870351 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.870359 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.870368 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.870376 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.870385 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.870393 | orchestrator |
2025-04-10 01:00:11.870402 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-04-10 01:00:11.870411 | orchestrator | Thursday 10 April 2025 00:48:12 +0000 (0:00:01.055) 0:01:55.385 ********
2025-04-10 01:00:11.870419 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.870428 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.870436 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.870445 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.870453 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.870461 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.870469 | orchestrator |
2025-04-10 01:00:11.870477 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-04-10 01:00:11.870485 | orchestrator | Thursday 10 April 2025 00:48:13 +0000 (0:00:00.897) 0:01:56.282 ********
2025-04-10 01:00:11.870492 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.870500 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.870508 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.870516 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.870524 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.870532 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.870540 | orchestrator |
2025-04-10 01:00:11.870547 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-04-10 01:00:11.870556 | orchestrator | Thursday 10 April 2025 00:48:14 +0000 (0:00:01.142) 0:01:57.425 ********
2025-04-10 01:00:11.870564 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.870571 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.870579 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.870587 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.870595 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.870603 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.870611 | orchestrator |
2025-04-10 01:00:11.870619 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-04-10 01:00:11.870627 | orchestrator | Thursday 10 April 2025 00:48:15 +0000 (0:00:00.621) 0:01:58.047 ********
2025-04-10 01:00:11.870636 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.870644 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.870651 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.870659 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.870667 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.870675 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.870683 | orchestrator |
2025-04-10 01:00:11.870691 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-04-10 01:00:11.870703 | orchestrator | Thursday 10 April 2025 00:48:16 +0000 (0:00:00.928) 0:01:58.976 ********
2025-04-10 01:00:11.870711 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.870719 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.870727 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.870735 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.870743 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.870756 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.870764 | orchestrator |
2025-04-10 01:00:11.870772 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-04-10 01:00:11.870780 | orchestrator | Thursday 10 April 2025 00:48:16 +0000 (0:00:00.637) 0:01:59.614 ********
2025-04-10 01:00:11.870788 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.870796 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.870815 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.870827 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.870835 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.870857 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.870866 | orchestrator |
2025-04-10 01:00:11.870874 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-04-10 01:00:11.870882 | orchestrator | Thursday 10 April 2025 00:48:17 +0000 (0:00:00.977) 0:02:00.592 ********
2025-04-10 01:00:11.870889 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.870897 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.870905 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.870913 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.870921 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.870929 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.870936 | orchestrator |
2025-04-10 01:00:11.870945 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-04-10 01:00:11.870952 | orchestrator | Thursday 10 April 2025 00:48:18 +0000 (0:00:00.684) 0:02:01.276 ********
2025-04-10 01:00:11.870960 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.870968 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.870976 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.870984 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.870992 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.871000 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.871008 | orchestrator |
2025-04-10 01:00:11.871017 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-04-10 01:00:11.871025 | orchestrator | Thursday 10 April 2025 00:48:19 +0000 (0:00:00.898) 0:02:02.175 ********
2025-04-10 01:00:11.871033 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.871041 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.871049 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.871057 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.871065 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.871072 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.871080 | orchestrator |
2025-04-10 01:00:11.871088 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-04-10 01:00:11.871096 | orchestrator | Thursday 10 April 2025 00:48:20 +0000 (0:00:00.685) 0:02:02.860 ********
2025-04-10 01:00:11.871104 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.871112 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.871120 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.871128 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.871136 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.871143 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.871151 | orchestrator |
2025-04-10 01:00:11.871206 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-04-10 01:00:11.871217 | orchestrator | Thursday 10 April 2025 00:48:21 +0000 (0:00:00.921) 0:02:03.782 ********
2025-04-10 01:00:11.871226 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.871234 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.871242 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.871251 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.871259 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.871267 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.871276 | orchestrator |
2025-04-10 01:00:11.871284 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-04-10 01:00:11.871298 | orchestrator | Thursday 10 April 2025 00:48:21 +0000 (0:00:01.067) 0:02:04.569 ********
2025-04-10 01:00:11.871306 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.871314 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.871323 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.871331 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.871339 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.871347 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.871356 | orchestrator |
2025-04-10 01:00:11.871364 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-04-10 01:00:11.871373 | orchestrator | Thursday 10 April 2025 00:48:22 +0000 (0:00:01.017) 0:02:05.637 ********
2025-04-10 01:00:11.871381 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.871390 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.871398 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.871406 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.871415 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.871423 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.871431 | orchestrator |
2025-04-10 01:00:11.871440 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-04-10 01:00:11.871448 | orchestrator | Thursday 10 April 2025 00:48:23 +0000 (0:00:01.017) 0:02:06.654 ********
2025-04-10 01:00:11.871457 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.871465 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.871473 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.871481 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.871490 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.871498 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.871510 | orchestrator |
2025-04-10 01:00:11.871519 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-04-10 01:00:11.871527 | orchestrator | Thursday 10 April 2025 00:48:25 +0000 (0:00:01.549) 0:02:08.204 ********
2025-04-10 01:00:11.871536 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.871544 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.871552 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.871561 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.871569 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.871577 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.871586 | orchestrator |
2025-04-10 01:00:11.871594 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-04-10 01:00:11.871603 | orchestrator | Thursday 10 April 2025 00:48:26 +0000 (0:00:00.680) 0:02:08.884 ********
2025-04-10 01:00:11.871611 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.871619 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.871628 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.871636 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.871644 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.871652 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.871661 | orchestrator |
2025-04-10 01:00:11.871669 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-04-10 01:00:11.871678 | orchestrator | Thursday 10 April 2025 00:48:27 +0000 (0:00:00.916) 0:02:09.800 ********
2025-04-10 01:00:11.871686 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-10 01:00:11.871695 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-10 01:00:11.871703 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-10 01:00:11.871711 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-10 01:00:11.871720 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.871728 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-10 01:00:11.871736 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-10 01:00:11.871744 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.871758 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-10 01:00:11.871766 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-10 01:00:11.871774 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.871783 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-10 01:00:11.871791 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-10 01:00:11.871799 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.871807 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.871816 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-10 01:00:11.871827 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-10 01:00:11.871836 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.871857 | orchestrator |
2025-04-10 01:00:11.871865 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-04-10 01:00:11.871874 | orchestrator | Thursday 10 April 2025 00:48:27 +0000 (0:00:00.832) 0:02:10.632 ********
2025-04-10 01:00:11.871882 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-04-10 01:00:11.871890 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-04-10 01:00:11.871897 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.871905 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-04-10 01:00:11.871913 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-04-10 01:00:11.871921 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.871929 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-04-10 01:00:11.871937 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-04-10 01:00:11.871989 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.872001 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-04-10 01:00:11.872010 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-04-10 01:00:11.872018 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.872027 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-04-10 01:00:11.872035 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-04-10 01:00:11.872044 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.872052 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-04-10 01:00:11.872060 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-04-10 01:00:11.872069 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.872077 | orchestrator |
2025-04-10 01:00:11.872086 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-04-10 01:00:11.872094 | orchestrator | Thursday 10 April 2025 00:48:28 +0000 (0:00:00.944) 0:02:11.577 ********
2025-04-10 01:00:11.872102 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.872111 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.872119 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.872128 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.872136 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.872145 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.872153 | orchestrator |
2025-04-10 01:00:11.872162 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-04-10 01:00:11.872170 | orchestrator | Thursday 10 April 2025 00:48:29 +0000 (0:00:00.670) 0:02:12.247 ********
2025-04-10 01:00:11.872178 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.872187 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.872195 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.872204 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.872212 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.872220 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.872229 | orchestrator |
2025-04-10 01:00:11.872237 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-04-10 01:00:11.872246 | orchestrator | Thursday 10 April 2025 00:48:30 +0000 (0:00:00.890) 0:02:13.137 ********
2025-04-10 01:00:11.872262 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.872271 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.872279 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.872288 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.872296 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.872304 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.872313 | orchestrator |
2025-04-10 01:00:11.872321 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-10 01:00:11.872330 | orchestrator | Thursday 10 April 2025 00:48:31 +0000 (0:00:00.687) 0:02:13.825 ********
2025-04-10 01:00:11.872338 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.872375 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.872384 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.872392 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.872400 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.872408 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.872416 | orchestrator |
2025-04-10 01:00:11.872424 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-10 01:00:11.872432 | orchestrator | Thursday 10 April 2025 00:48:32 +0000 (0:00:00.901) 0:02:14.727 ********
2025-04-10 01:00:11.872440 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.872448 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.872456 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.872468 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.872476 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.872484 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.872492 | orchestrator |
2025-04-10 01:00:11.872504 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-10 01:00:11.872512 | orchestrator | Thursday 10 April 2025 00:48:32 +0000 (0:00:00.650) 0:02:15.377 ********
2025-04-10 01:00:11.872520 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.872528 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.872535 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.872543 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.872551 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.872559 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.872567 | orchestrator |
2025-04-10 01:00:11.872575 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-10 01:00:11.872583 | orchestrator | Thursday 10 April 2025 00:48:33 +0000 (0:00:00.898) 0:02:16.276 ********
2025-04-10 01:00:11.872591 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-10 01:00:11.872599 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-10 01:00:11.872607 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-10 01:00:11.872615 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.872623 | orchestrator |
2025-04-10 01:00:11.872633 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-10 01:00:11.872642 | orchestrator | Thursday 10 April 2025 00:48:34 +0000 (0:00:00.433) 0:02:16.710 ********
2025-04-10 01:00:11.872651 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-10 01:00:11.872661 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-10 01:00:11.872670 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-10 01:00:11.872679 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.872688 | orchestrator |
2025-04-10 01:00:11.872697 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-10 01:00:11.872706 | orchestrator | Thursday 10 April 2025 00:48:34 +0000 (0:00:00.448) 0:02:17.158 ********
2025-04-10 01:00:11.872715 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-10 01:00:11.872724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-10 01:00:11.872733 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-10 01:00:11.872792 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.872804 | orchestrator |
2025-04-10 01:00:11.872813 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-10 01:00:11.872822 | orchestrator | Thursday 10 April 2025 00:48:34 +0000 (0:00:00.421) 0:02:17.580 ********
2025-04-10 01:00:11.872866 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.872877 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.872886 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.872895 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.872905 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.872914 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.872923 | orchestrator |
2025-04-10 01:00:11.872932 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-04-10 01:00:11.872941 | orchestrator | Thursday 10 April 2025 00:48:35 +0000 (0:00:00.930) 0:02:18.510 ********
2025-04-10 01:00:11.872950 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-04-10 01:00:11.872959 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.872968 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-04-10 01:00:11.872977 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.872987 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-04-10 01:00:11.872995 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.873003 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-10 01:00:11.873011 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.873019 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-10 01:00:11.873027 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.873035 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-10 01:00:11.873043 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.873051 | orchestrator |
2025-04-10 01:00:11.873059 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-04-10 01:00:11.873067 | orchestrator | Thursday 10 April 2025 00:48:36 +0000 (0:00:00.970) 0:02:19.481 ********
2025-04-10 01:00:11.873075 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.873083 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.873091 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.873099 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.873107 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.873115 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.873123 | orchestrator |
2025-04-10 01:00:11.873131 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-10 01:00:11.873139 | orchestrator | Thursday 10 April 2025 00:48:37 +0000 (0:00:00.900) 0:02:20.382 ********
2025-04-10 01:00:11.873147 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.873155 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.873163 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.873171 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.873179 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.873187 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.873194 | orchestrator |
2025-04-10 01:00:11.873202 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-04-10 01:00:11.873210 | orchestrator | Thursday 10 April 2025 00:48:38 +0000 (0:00:00.691) 0:02:21.073 ********
2025-04-10 01:00:11.873220 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-04-10 01:00:11.873232 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.873246 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-04-10 01:00:11.873257 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.873268 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-04-10 01:00:11.873279 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.873291 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-10 01:00:11.873303 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.873315 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-10 01:00:11.873335 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.873349 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-10 01:00:11.873362 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.873371 | orchestrator |
2025-04-10 01:00:11.873379 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-04-10 01:00:11.873387 | orchestrator | Thursday 10 April 2025 00:48:39 +0000 (0:00:01.076) 0:02:22.149 ********
2025-04-10 01:00:11.873394 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.873403 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.873411 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.873419 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-04-10 01:00:11.873427 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.873438 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-04-10 01:00:11.873447 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.873455 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-04-10 01:00:11.873463 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.873471 | orchestrator |
2025-04-10 01:00:11.873479 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-04-10 01:00:11.873487 | orchestrator | Thursday 10 April 2025 00:48:40 +0000 (0:00:00.719) 0:02:22.868 ********
2025-04-10 01:00:11.873495 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-10 01:00:11.873502 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-10 01:00:11.873511 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-10 01:00:11.873519 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.873527 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-04-10 01:00:11.873534 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-04-10 01:00:11.873543 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-04-10 01:00:11.873550 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.873562 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-04-10 01:00:11.873627 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-04-10 01:00:11.873639 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-04-10 01:00:11.873647 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.873656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.873665 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.873673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.873682 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-04-10 01:00:11.873690 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-04-10 01:00:11.873699 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-04-10 01:00:11.873707 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.873721 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.873730 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-04-10 01:00:11.873738 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-04-10 01:00:11.873747 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-04-10 01:00:11.873756 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.873764 | orchestrator |
2025-04-10 01:00:11.873773 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-04-10 01:00:11.873781 | orchestrator | Thursday 10 April 2025 00:48:41 +0000 (0:00:01.729) 0:02:24.598 ********
2025-04-10 01:00:11.873790 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.873798 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.873811 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.873820 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.873828 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.873837 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.873887 | orchestrator |
2025-04-10 01:00:11.873896 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-04-10 01:00:11.873904 | orchestrator | Thursday 10 April 2025 00:48:43 +0000 (0:00:01.491) 0:02:26.089 ********
2025-04-10 01:00:11.873912 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.873920 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.873928 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.873936 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-10 01:00:11.873944 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.873952 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-04-10 01:00:11.873961 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.873968 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-04-10 01:00:11.873975 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.873982 | orchestrator |
2025-04-10 01:00:11.873989 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-04-10 01:00:11.873996 | orchestrator | Thursday 10 April 2025 00:48:45 +0000 (0:00:01.644) 0:02:27.734 ********
2025-04-10 01:00:11.874003 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.874010 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.874037 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.874046 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.874054 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.874061 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.874068 | orchestrator |
2025-04-10 01:00:11.874075 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-04-10 01:00:11.874082 | orchestrator | Thursday 10 April 2025 00:48:46 +0000 (0:00:01.534) 0:02:29.269 ********
2025-04-10 01:00:11.874089 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.874096 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.874103 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.874125 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.874133 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.874140 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.874147 | orchestrator |
2025-04-10 01:00:11.874154 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] ***********
2025-04-10 01:00:11.874161 | orchestrator | Thursday 10 April 2025 00:48:48 +0000 (0:00:01.597) 0:02:30.866 ********
2025-04-10 01:00:11.874168 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:11.874175 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:11.874182 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:11.874188 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.874195 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.874202 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.874209 | orchestrator |
2025-04-10 01:00:11.874220 | orchestrator | TASK [ceph-container-common : enable ceph.target] ******************************
2025-04-10 01:00:11.874227 | orchestrator | Thursday 10 April 2025 00:48:50 +0000 (0:00:02.182) 0:02:33.049 ********
2025-04-10 01:00:11.874234 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:11.874241 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:11.874248 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:11.874255 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.874263 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.874270 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.874278 | orchestrator |
2025-04-10 01:00:11.874286 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] ***********************
2025-04-10 01:00:11.874293 | orchestrator | Thursday 10 April 2025 00:48:52 +0000 (0:00:02.118) 0:02:35.167 ********
2025-04-10 01:00:11.874302 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:00:11.874319 | orchestrator |
2025-04-10 01:00:11.874327 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************
2025-04-10 01:00:11.874335 | orchestrator | Thursday 10 April 2025 00:48:53 +0000 (0:00:01.300) 0:02:36.468 ********
2025-04-10 01:00:11.874342 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.874350 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.874358 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.874366 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.874374 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.874381 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.874389 | orchestrator |
2025-04-10 01:00:11.874445 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] ****************
2025-04-10 01:00:11.874456 | orchestrator | Thursday 10 April 2025 00:48:54 +0000 (0:00:00.887) 0:02:37.355 ********
2025-04-10 01:00:11.874464 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.874473 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.874481 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.874489 | orchestrator | skipping: [testbed-node-3]
2025-04-10
01:00:11.874497 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.874505 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.874518 | orchestrator | 2025-04-10 01:00:11.874526 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-04-10 01:00:11.874535 | orchestrator | Thursday 10 April 2025 00:48:55 +0000 (0:00:00.667) 0:02:38.023 ******** 2025-04-10 01:00:11.874543 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-10 01:00:11.874551 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-10 01:00:11.874559 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-10 01:00:11.874567 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-10 01:00:11.874575 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-10 01:00:11.874583 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-10 01:00:11.874591 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-10 01:00:11.874599 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-10 01:00:11.874607 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-10 01:00:11.874616 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-10 01:00:11.874623 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-10 01:00:11.874631 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-10 01:00:11.874638 | orchestrator | 2025-04-10 01:00:11.874645 | orchestrator | TASK [ceph-container-common : ensure 
tmpfiles.d is present] ******************** 2025-04-10 01:00:11.874652 | orchestrator | Thursday 10 April 2025 00:48:57 +0000 (0:00:01.675) 0:02:39.699 ******** 2025-04-10 01:00:11.874659 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.874666 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.874673 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.874680 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.874688 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.874695 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.874702 | orchestrator | 2025-04-10 01:00:11.874709 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************ 2025-04-10 01:00:11.874716 | orchestrator | Thursday 10 April 2025 00:48:58 +0000 (0:00:01.449) 0:02:41.148 ******** 2025-04-10 01:00:11.874723 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.874731 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.874742 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.874750 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.874757 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.874764 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.874771 | orchestrator | 2025-04-10 01:00:11.874779 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-04-10 01:00:11.874786 | orchestrator | Thursday 10 April 2025 00:48:59 +0000 (0:00:01.041) 0:02:42.190 ******** 2025-04-10 01:00:11.874793 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.874800 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.874808 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.874815 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.874822 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.874829 | orchestrator | skipping: 
[testbed-node-5] 2025-04-10 01:00:11.874836 | orchestrator | 2025-04-10 01:00:11.874856 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-04-10 01:00:11.874863 | orchestrator | Thursday 10 April 2025 00:49:00 +0000 (0:00:00.656) 0:02:42.846 ******** 2025-04-10 01:00:11.874870 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.874877 | orchestrator | 2025-04-10 01:00:11.874884 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-04-10 01:00:11.874891 | orchestrator | Thursday 10 April 2025 00:49:01 +0000 (0:00:01.491) 0:02:44.338 ******** 2025-04-10 01:00:11.874898 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.874905 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.874912 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.874919 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.874926 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.874933 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.874940 | orchestrator | 2025-04-10 01:00:11.874954 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-04-10 01:00:11.874961 | orchestrator | Thursday 10 April 2025 00:49:32 +0000 (0:00:30.475) 0:03:14.813 ******** 2025-04-10 01:00:11.874968 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-10 01:00:11.874975 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-10 01:00:11.874982 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-10 01:00:11.874989 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.874996 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/prom/alertmanager:v0.16.2)  2025-04-10 01:00:11.875003 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-10 01:00:11.875049 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-10 01:00:11.875059 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.875066 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-10 01:00:11.875073 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-10 01:00:11.875080 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-10 01:00:11.875087 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.875094 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-10 01:00:11.875101 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-10 01:00:11.875108 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-10 01:00:11.875115 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.875122 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-10 01:00:11.875129 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-10 01:00:11.875140 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-10 01:00:11.875148 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.875155 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-10 01:00:11.875162 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-10 01:00:11.875169 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-10 01:00:11.875176 | orchestrator | skipping: 
[testbed-node-5] 2025-04-10 01:00:11.875183 | orchestrator | 2025-04-10 01:00:11.875190 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-04-10 01:00:11.875197 | orchestrator | Thursday 10 April 2025 00:49:33 +0000 (0:00:01.064) 0:03:15.878 ******** 2025-04-10 01:00:11.875204 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.875211 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.875218 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.875225 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.875232 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.875239 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.875246 | orchestrator | 2025-04-10 01:00:11.875253 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-04-10 01:00:11.875260 | orchestrator | Thursday 10 April 2025 00:49:33 +0000 (0:00:00.756) 0:03:16.634 ******** 2025-04-10 01:00:11.875267 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.875274 | orchestrator | 2025-04-10 01:00:11.875282 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-04-10 01:00:11.875289 | orchestrator | Thursday 10 April 2025 00:49:34 +0000 (0:00:00.184) 0:03:16.819 ******** 2025-04-10 01:00:11.875296 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.875303 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.875310 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.875317 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.875324 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.875331 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.875338 | orchestrator | 2025-04-10 01:00:11.875345 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-04-10 01:00:11.875352 | 
orchestrator | Thursday 10 April 2025 00:49:35 +0000 (0:00:00.977) 0:03:17.796 ******** 2025-04-10 01:00:11.875359 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.875366 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.875373 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.875380 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.875387 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.875394 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.875401 | orchestrator | 2025-04-10 01:00:11.875408 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-04-10 01:00:11.875415 | orchestrator | Thursday 10 April 2025 00:49:35 +0000 (0:00:00.799) 0:03:18.595 ******** 2025-04-10 01:00:11.875422 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.875429 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.875439 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.875446 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.875453 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.875460 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.875467 | orchestrator | 2025-04-10 01:00:11.875474 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-04-10 01:00:11.875484 | orchestrator | Thursday 10 April 2025 00:49:37 +0000 (0:00:01.127) 0:03:19.723 ******** 2025-04-10 01:00:11.875491 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.875498 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.875505 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.875512 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.875519 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.875530 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.875537 | orchestrator | 2025-04-10 01:00:11.875544 | orchestrator | TASK 
[ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-04-10 01:00:11.875551 | orchestrator | Thursday 10 April 2025 00:49:40 +0000 (0:00:03.472) 0:03:23.195 ******** 2025-04-10 01:00:11.875558 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.875564 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.875571 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.875578 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.875585 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.875592 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.875599 | orchestrator | 2025-04-10 01:00:11.875606 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-04-10 01:00:11.875613 | orchestrator | Thursday 10 April 2025 00:49:41 +0000 (0:00:00.607) 0:03:23.802 ******** 2025-04-10 01:00:11.875620 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.875628 | orchestrator | 2025-04-10 01:00:11.875682 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-04-10 01:00:11.875693 | orchestrator | Thursday 10 April 2025 00:49:42 +0000 (0:00:01.232) 0:03:25.035 ******** 2025-04-10 01:00:11.875700 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.875707 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.875714 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.875721 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.875728 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.875735 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.875742 | orchestrator | 2025-04-10 01:00:11.875749 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-04-10 01:00:11.875756 | 
orchestrator | Thursday 10 April 2025 00:49:43 +0000 (0:00:00.966) 0:03:26.001 ******** 2025-04-10 01:00:11.875763 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.875770 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.875777 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.875784 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.875791 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.875798 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.875805 | orchestrator | 2025-04-10 01:00:11.875812 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-04-10 01:00:11.875819 | orchestrator | Thursday 10 April 2025 00:49:44 +0000 (0:00:00.688) 0:03:26.690 ******** 2025-04-10 01:00:11.875826 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.875833 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.875852 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.875860 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.875867 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.875874 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.875881 | orchestrator | 2025-04-10 01:00:11.875888 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-04-10 01:00:11.875895 | orchestrator | Thursday 10 April 2025 00:49:44 +0000 (0:00:00.969) 0:03:27.660 ******** 2025-04-10 01:00:11.875902 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.875909 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.875916 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.875923 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.875930 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.875937 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.875944 | orchestrator | 2025-04-10 
01:00:11.875951 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-04-10 01:00:11.875958 | orchestrator | Thursday 10 April 2025 00:49:45 +0000 (0:00:00.835) 0:03:28.496 ******** 2025-04-10 01:00:11.875965 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.875972 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.875983 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.875990 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.875997 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.876004 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.876011 | orchestrator | 2025-04-10 01:00:11.876018 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-04-10 01:00:11.876025 | orchestrator | Thursday 10 April 2025 00:49:46 +0000 (0:00:00.981) 0:03:29.477 ******** 2025-04-10 01:00:11.876032 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.876039 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.876046 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.876053 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.876060 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.876070 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.876077 | orchestrator | 2025-04-10 01:00:11.876084 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-04-10 01:00:11.876091 | orchestrator | Thursday 10 April 2025 00:49:47 +0000 (0:00:00.728) 0:03:30.206 ******** 2025-04-10 01:00:11.876098 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.876105 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.876112 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.876119 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.876126 | orchestrator | skipping: 
[testbed-node-4] 2025-04-10 01:00:11.876133 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.876140 | orchestrator | 2025-04-10 01:00:11.876147 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-04-10 01:00:11.876154 | orchestrator | Thursday 10 April 2025 00:49:48 +0000 (0:00:01.238) 0:03:31.444 ******** 2025-04-10 01:00:11.876161 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.876168 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.876175 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.876182 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.876189 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.876196 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.876203 | orchestrator | 2025-04-10 01:00:11.876210 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-10 01:00:11.876217 | orchestrator | Thursday 10 April 2025 00:49:50 +0000 (0:00:01.844) 0:03:33.289 ******** 2025-04-10 01:00:11.876225 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.876232 | orchestrator | 2025-04-10 01:00:11.876239 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-04-10 01:00:11.876246 | orchestrator | Thursday 10 April 2025 00:49:52 +0000 (0:00:01.524) 0:03:34.813 ******** 2025-04-10 01:00:11.876253 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-04-10 01:00:11.876260 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-04-10 01:00:11.876267 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-04-10 01:00:11.876274 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-04-10 01:00:11.876281 | orchestrator | changed: [testbed-node-3] => 
(item=/etc/ceph) 2025-04-10 01:00:11.876288 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-04-10 01:00:11.876295 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-04-10 01:00:11.876302 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-04-10 01:00:11.876349 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-04-10 01:00:11.876360 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-04-10 01:00:11.876368 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-04-10 01:00:11.876377 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-04-10 01:00:11.876385 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-04-10 01:00:11.876397 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-04-10 01:00:11.876406 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-04-10 01:00:11.876414 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-04-10 01:00:11.876422 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-04-10 01:00:11.876431 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-04-10 01:00:11.876438 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-04-10 01:00:11.876446 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-04-10 01:00:11.876454 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-04-10 01:00:11.876462 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-04-10 01:00:11.876470 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-04-10 01:00:11.876478 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-04-10 01:00:11.876486 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-04-10 01:00:11.876494 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/osd) 2025-04-10 01:00:11.876502 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-04-10 01:00:11.876510 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-04-10 01:00:11.876518 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-04-10 01:00:11.876526 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-04-10 01:00:11.876534 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-04-10 01:00:11.876542 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-04-10 01:00:11.876550 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-04-10 01:00:11.876557 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-04-10 01:00:11.876565 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-04-10 01:00:11.876573 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-04-10 01:00:11.876584 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-10 01:00:11.876592 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-04-10 01:00:11.876600 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-04-10 01:00:11.876608 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-04-10 01:00:11.876616 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-10 01:00:11.876624 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-04-10 01:00:11.876633 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-10 01:00:11.876641 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-04-10 01:00:11.876649 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-04-10 01:00:11.876656 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-10 01:00:11.876664 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-10 01:00:11.876672 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-10 01:00:11.876679 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-10 01:00:11.876686 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-10 01:00:11.876693 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-10 01:00:11.876700 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-10 01:00:11.876707 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-10 01:00:11.876714 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-10 01:00:11.876720 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-10 01:00:11.876735 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-10 01:00:11.876742 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-10 01:00:11.876749 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-10 01:00:11.876756 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-10 01:00:11.876763 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-10 01:00:11.876770 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-10 01:00:11.876778 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-10 01:00:11.876785 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-10 01:00:11.876792 | orchestrator | changed: [testbed-node-1] => 
(item=/var/lib/ceph/bootstrap-osd) 2025-04-10 01:00:11.876799 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-10 01:00:11.876806 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-10 01:00:11.876813 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-10 01:00:11.876877 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-10 01:00:11.876889 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-10 01:00:11.876896 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-10 01:00:11.876903 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-10 01:00:11.876910 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-10 01:00:11.876917 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-10 01:00:11.876924 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-04-10 01:00:11.876932 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-10 01:00:11.876939 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-10 01:00:11.876946 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-04-10 01:00:11.876953 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-10 01:00:11.876960 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-10 01:00:11.876967 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-04-10 01:00:11.876974 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-10 01:00:11.876981 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-04-10 01:00:11.876988 | 
orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-04-10 01:00:11.876995 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-04-10 01:00:11.877002 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-04-10 01:00:11.877009 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-04-10 01:00:11.877016 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-04-10 01:00:11.877023 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-04-10 01:00:11.877030 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-04-10 01:00:11.877037 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-04-10 01:00:11.877044 | orchestrator | 2025-04-10 01:00:11.877051 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-10 01:00:11.877061 | orchestrator | Thursday 10 April 2025 00:49:58 +0000 (0:00:06.150) 0:03:40.964 ******** 2025-04-10 01:00:11.877068 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.877076 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.877083 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.877090 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.877101 | orchestrator | 2025-04-10 01:00:11.877109 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-04-10 01:00:11.877115 | orchestrator | Thursday 10 April 2025 00:49:59 +0000 (0:00:01.329) 0:03:42.294 ******** 2025-04-10 01:00:11.877122 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-10 01:00:11.877130 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-04-10 01:00:11.877137 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-10 01:00:11.877144 | orchestrator | 2025-04-10 01:00:11.877151 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-04-10 01:00:11.877158 | orchestrator | Thursday 10 April 2025 00:50:00 +0000 (0:00:01.323) 0:03:43.617 ******** 2025-04-10 01:00:11.877165 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-10 01:00:11.877172 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-10 01:00:11.877179 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-10 01:00:11.877186 | orchestrator | 2025-04-10 01:00:11.877193 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-10 01:00:11.877200 | orchestrator | Thursday 10 April 2025 00:50:02 +0000 (0:00:01.412) 0:03:45.029 ******** 2025-04-10 01:00:11.877207 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.877214 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.877221 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.877228 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.877235 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.877242 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.877250 | orchestrator | 2025-04-10 01:00:11.877257 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-10 01:00:11.877264 | orchestrator | Thursday 10 April 2025 00:50:03 +0000 (0:00:01.049) 0:03:46.079 ******** 
2025-04-10 01:00:11.877270 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.877278 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.877285 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.877292 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.877299 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.877306 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.877313 | orchestrator | 2025-04-10 01:00:11.877320 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-10 01:00:11.877327 | orchestrator | Thursday 10 April 2025 00:50:04 +0000 (0:00:00.780) 0:03:46.859 ******** 2025-04-10 01:00:11.877334 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.877376 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.877386 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.877393 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.877401 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.877408 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.877415 | orchestrator | 2025-04-10 01:00:11.877422 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-10 01:00:11.877429 | orchestrator | Thursday 10 April 2025 00:50:05 +0000 (0:00:00.998) 0:03:47.858 ******** 2025-04-10 01:00:11.877436 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.877443 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.877450 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.877457 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.877464 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.877471 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.877482 | orchestrator | 2025-04-10 01:00:11.877490 | orchestrator | TASK [ceph-config : set_fact _devices] 
***************************************** 2025-04-10 01:00:11.877497 | orchestrator | Thursday 10 April 2025 00:50:05 +0000 (0:00:00.782) 0:03:48.640 ******** 2025-04-10 01:00:11.877504 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.877511 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.877518 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.877525 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.877532 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.877539 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.877547 | orchestrator | 2025-04-10 01:00:11.877554 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-10 01:00:11.877561 | orchestrator | Thursday 10 April 2025 00:50:06 +0000 (0:00:00.972) 0:03:49.613 ******** 2025-04-10 01:00:11.877568 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.877575 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.877582 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.877590 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.877597 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.877604 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.877616 | orchestrator | 2025-04-10 01:00:11.877623 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-10 01:00:11.877630 | orchestrator | Thursday 10 April 2025 00:50:07 +0000 (0:00:00.731) 0:03:50.345 ******** 2025-04-10 01:00:11.877638 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.877646 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.877653 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.877660 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.877667 | orchestrator | skipping: [testbed-node-4] 2025-04-10 
01:00:11.877674 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.877681 | orchestrator | 2025-04-10 01:00:11.877688 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-10 01:00:11.877695 | orchestrator | Thursday 10 April 2025 00:50:08 +0000 (0:00:01.114) 0:03:51.460 ******** 2025-04-10 01:00:11.877703 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.877710 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.877717 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.877724 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.877731 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.877738 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.877745 | orchestrator | 2025-04-10 01:00:11.877752 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-10 01:00:11.877759 | orchestrator | Thursday 10 April 2025 00:50:09 +0000 (0:00:00.693) 0:03:52.153 ******** 2025-04-10 01:00:11.877766 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.877773 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.877780 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.877787 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.877794 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.877801 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.877808 | orchestrator | 2025-04-10 01:00:11.877815 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-10 01:00:11.877822 | orchestrator | Thursday 10 April 2025 00:50:12 +0000 (0:00:02.566) 0:03:54.720 ******** 2025-04-10 01:00:11.877829 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.877836 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.877878 | orchestrator | skipping: 
[testbed-node-2] 2025-04-10 01:00:11.877885 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.877892 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.877899 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.877906 | orchestrator | 2025-04-10 01:00:11.877913 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-10 01:00:11.877927 | orchestrator | Thursday 10 April 2025 00:50:12 +0000 (0:00:00.724) 0:03:55.444 ******** 2025-04-10 01:00:11.877934 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-10 01:00:11.877942 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-10 01:00:11.877949 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.877956 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-10 01:00:11.877966 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-10 01:00:11.877973 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.877981 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-10 01:00:11.877998 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-10 01:00:11.878006 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.878032 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-10 01:00:11.878042 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-10 01:00:11.878050 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.878058 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-10 01:00:11.878067 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-10 01:00:11.878075 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.878083 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-10 01:00:11.878092 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-10 01:00:11.878100 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.878108 | orchestrator | 2025-04-10 01:00:11.878115 | 
orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-10 01:00:11.878168 | orchestrator | Thursday 10 April 2025 00:50:13 +0000 (0:00:01.061) 0:03:56.505 ******** 2025-04-10 01:00:11.878178 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-10 01:00:11.878188 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-10 01:00:11.878196 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.878203 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-10 01:00:11.878210 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-10 01:00:11.878217 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.878224 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-10 01:00:11.878231 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-10 01:00:11.878238 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.878245 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-04-10 01:00:11.878252 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-04-10 01:00:11.878259 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-04-10 01:00:11.878266 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-04-10 01:00:11.878273 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-04-10 01:00:11.878280 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-04-10 01:00:11.878287 | orchestrator | 2025-04-10 01:00:11.878294 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-10 01:00:11.878301 | orchestrator | Thursday 10 April 2025 00:50:14 +0000 (0:00:00.793) 0:03:57.299 ******** 2025-04-10 01:00:11.878309 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.878316 | orchestrator | skipping: 
[testbed-node-1] 2025-04-10 01:00:11.878322 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.878329 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.878337 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.878344 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.878351 | orchestrator | 2025-04-10 01:00:11.878358 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-10 01:00:11.878365 | orchestrator | Thursday 10 April 2025 00:50:15 +0000 (0:00:01.129) 0:03:58.429 ******** 2025-04-10 01:00:11.878373 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.878379 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.878389 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.878396 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.878402 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.878408 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.878414 | orchestrator | 2025-04-10 01:00:11.878420 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-10 01:00:11.878427 | orchestrator | Thursday 10 April 2025 00:50:16 +0000 (0:00:00.751) 0:03:59.180 ******** 2025-04-10 01:00:11.878433 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.878439 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.878445 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.878451 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.878457 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.878466 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.878472 | orchestrator | 2025-04-10 01:00:11.878479 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-10 01:00:11.878485 | orchestrator | Thursday 10 April 2025 
00:50:17 +0000 (0:00:01.043) 0:04:00.224 ******** 2025-04-10 01:00:11.878491 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.878497 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.878503 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.878509 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.878515 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.878521 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.878528 | orchestrator | 2025-04-10 01:00:11.878536 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-10 01:00:11.878543 | orchestrator | Thursday 10 April 2025 00:50:18 +0000 (0:00:00.871) 0:04:01.096 ******** 2025-04-10 01:00:11.878549 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.878555 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.878561 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.878568 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.878574 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.878580 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.878586 | orchestrator | 2025-04-10 01:00:11.878592 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-10 01:00:11.878598 | orchestrator | Thursday 10 April 2025 00:50:19 +0000 (0:00:01.067) 0:04:02.163 ******** 2025-04-10 01:00:11.878605 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.878611 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.878617 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.878623 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.878629 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.878635 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.878642 | orchestrator | 2025-04-10 01:00:11.878648 | orchestrator | TASK [ceph-facts : set_fact 
_interface] **************************************** 2025-04-10 01:00:11.878654 | orchestrator | Thursday 10 April 2025 00:50:20 +0000 (0:00:00.821) 0:04:02.984 ******** 2025-04-10 01:00:11.878660 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-10 01:00:11.878667 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-10 01:00:11.878673 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-10 01:00:11.878679 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.878685 | orchestrator | 2025-04-10 01:00:11.878691 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-10 01:00:11.878698 | orchestrator | Thursday 10 April 2025 00:50:21 +0000 (0:00:00.755) 0:04:03.740 ******** 2025-04-10 01:00:11.878704 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-10 01:00:11.878710 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-10 01:00:11.878716 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-10 01:00:11.878723 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.878732 | orchestrator | 2025-04-10 01:00:11.878776 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-10 01:00:11.878785 | orchestrator | Thursday 10 April 2025 00:50:21 +0000 (0:00:00.421) 0:04:04.162 ******** 2025-04-10 01:00:11.878791 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-10 01:00:11.878797 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-10 01:00:11.878804 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-10 01:00:11.878810 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.878816 | orchestrator | 2025-04-10 01:00:11.878822 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 
2025-04-10 01:00:11.878828 | orchestrator | Thursday 10 April 2025 00:50:22 +0000 (0:00:00.540) 0:04:04.703 ******** 2025-04-10 01:00:11.878835 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.878853 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.878860 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.878866 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.878872 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.878878 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.878884 | orchestrator | 2025-04-10 01:00:11.878891 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-10 01:00:11.878897 | orchestrator | Thursday 10 April 2025 00:50:22 +0000 (0:00:00.764) 0:04:05.467 ******** 2025-04-10 01:00:11.878903 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-10 01:00:11.878909 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.878916 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-10 01:00:11.878922 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.878928 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-10 01:00:11.878934 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.878940 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-10 01:00:11.878946 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-10 01:00:11.878953 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-10 01:00:11.878959 | orchestrator | 2025-04-10 01:00:11.878965 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-10 01:00:11.878971 | orchestrator | Thursday 10 April 2025 00:50:24 +0000 (0:00:01.218) 0:04:06.685 ******** 2025-04-10 01:00:11.878977 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.878983 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.878990 | orchestrator | skipping: 
[testbed-node-2] 2025-04-10 01:00:11.878996 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.879002 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.879008 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.879014 | orchestrator | 2025-04-10 01:00:11.879021 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-10 01:00:11.879027 | orchestrator | Thursday 10 April 2025 00:50:24 +0000 (0:00:00.665) 0:04:07.350 ******** 2025-04-10 01:00:11.879033 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.879039 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.879045 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.879052 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.879058 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.879064 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.879070 | orchestrator | 2025-04-10 01:00:11.879076 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-10 01:00:11.879082 | orchestrator | Thursday 10 April 2025 00:50:25 +0000 (0:00:00.994) 0:04:08.345 ******** 2025-04-10 01:00:11.879089 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-10 01:00:11.879095 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-10 01:00:11.879101 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.879107 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.879113 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-10 01:00:11.879120 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.879130 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-10 01:00:11.879136 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.879142 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-10 01:00:11.879148 | orchestrator | skipping: 
[testbed-node-4] 2025-04-10 01:00:11.879158 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-10 01:00:11.879164 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.879170 | orchestrator | 2025-04-10 01:00:11.879186 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-10 01:00:11.879193 | orchestrator | Thursday 10 April 2025 00:50:26 +0000 (0:00:00.989) 0:04:09.335 ******** 2025-04-10 01:00:11.879199 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.879205 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.879211 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.879218 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-10 01:00:11.879224 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.879230 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-10 01:00:11.879237 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.879243 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-10 01:00:11.879249 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.879256 | orchestrator | 2025-04-10 01:00:11.879262 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-10 01:00:11.879268 | orchestrator | Thursday 10 April 2025 00:50:27 +0000 (0:00:01.034) 0:04:10.369 ******** 2025-04-10 01:00:11.879274 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-10 01:00:11.879280 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-10 01:00:11.879287 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-10 
01:00:11.879293 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.879299 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-10 01:00:11.879340 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-10 01:00:11.879349 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-10 01:00:11.879356 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-10 01:00:11.879362 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-10 01:00:11.879369 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-10 01:00:11.879376 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.879382 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.879389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:00:11.879395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:00:11.879402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:00:11.879409 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.879415 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-10 01:00:11.879422 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-10 01:00:11.879428 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-10 01:00:11.879435 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.879441 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-10 01:00:11.879448 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-10 01:00:11.879454 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-10 01:00:11.879461 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.879467 | orchestrator | 2025-04-10 01:00:11.879474 | orchestrator | TASK [ceph-config : generate ceph.conf 
configuration file] ********************* 2025-04-10 01:00:11.879485 | orchestrator | Thursday 10 April 2025 00:50:29 +0000 (0:00:02.235) 0:04:12.604 ******** 2025-04-10 01:00:11.879491 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.879498 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.879505 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.879511 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.879518 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.879524 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.879530 | orchestrator | 2025-04-10 01:00:11.879537 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-10 01:00:11.879544 | orchestrator | Thursday 10 April 2025 00:50:34 +0000 (0:00:05.044) 0:04:17.649 ******** 2025-04-10 01:00:11.879550 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.879556 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.879563 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.879569 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.879576 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.879582 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.879589 | orchestrator | 2025-04-10 01:00:11.879595 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-04-10 01:00:11.879602 | orchestrator | Thursday 10 April 2025 00:50:36 +0000 (0:00:01.381) 0:04:19.030 ******** 2025-04-10 01:00:11.879608 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.879615 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.879621 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.879627 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:00:11.879634 | orchestrator | 2025-04-10 
01:00:11.879641 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-04-10 01:00:11.879647 | orchestrator | Thursday 10 April 2025 00:50:37 +0000 (0:00:00.935) 0:04:19.965 ******** 2025-04-10 01:00:11.879654 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.879660 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.879667 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.879673 | orchestrator | 2025-04-10 01:00:11.879683 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-04-10 01:00:11.879690 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.879697 | orchestrator | 2025-04-10 01:00:11.879703 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-04-10 01:00:11.879710 | orchestrator | Thursday 10 April 2025 00:50:38 +0000 (0:00:01.201) 0:04:21.167 ******** 2025-04-10 01:00:11.879716 | orchestrator | 2025-04-10 01:00:11.879723 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-04-10 01:00:11.879729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:00:11.879736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:00:11.879742 | orchestrator | 2025-04-10 01:00:11.879749 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-04-10 01:00:11.879755 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.879762 | orchestrator | 2025-04-10 01:00:11.879768 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-04-10 01:00:11.879775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:00:11.879781 | orchestrator | skipping: [testbed-node-3] 2025-04-10 
01:00:11.879788 | orchestrator | 2025-04-10 01:00:11.879795 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-04-10 01:00:11.879801 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.879808 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.879814 | orchestrator | 2025-04-10 01:00:11.879821 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-04-10 01:00:11.879827 | orchestrator | Thursday 10 April 2025 00:50:39 +0000 (0:00:01.345) 0:04:22.512 ******** 2025-04-10 01:00:11.879837 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-10 01:00:11.879858 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-10 01:00:11.879865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-10 01:00:11.879871 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.879878 | orchestrator | 2025-04-10 01:00:11.879884 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-04-10 01:00:11.879890 | orchestrator | Thursday 10 April 2025 00:50:41 +0000 (0:00:01.307) 0:04:23.820 ******** 2025-04-10 01:00:11.879897 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.879937 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.879946 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.879952 | orchestrator | 2025-04-10 01:00:11.879958 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-04-10 01:00:11.879965 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.879971 | orchestrator | 2025-04-10 01:00:11.879977 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-04-10 01:00:11.879983 | orchestrator | Thursday 10 April 2025 00:50:41 +0000 (0:00:00.617) 0:04:24.438 ******** 2025-04-10 01:00:11.879989 | orchestrator | 
skipping: [testbed-node-0]
2025-04-10 01:00:11.879995 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.880002 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.880008 | orchestrator |
2025-04-10 01:00:11.880014 | orchestrator | TASK [ceph-handler : osds handler] *********************************************
2025-04-10 01:00:11.880020 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.880026 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.880033 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.880039 | orchestrator |
2025-04-10 01:00:11.880045 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-04-10 01:00:11.880051 | orchestrator | Thursday 10 April 2025 00:50:42 +0000 (0:00:00.720) 0:04:25.158 ********
2025-04-10 01:00:11.880058 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.880064 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.880070 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.880076 | orchestrator |
2025-04-10 01:00:11.880082 | orchestrator | TASK [ceph-handler : mdss handler] *********************************************
2025-04-10 01:00:11.880089 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.880095 | orchestrator |
2025-04-10 01:00:11.880101 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-04-10 01:00:11.880107 | orchestrator | Thursday 10 April 2025 00:50:43 +0000 (0:00:00.881) 0:04:26.039 ********
2025-04-10 01:00:11.880113 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.880123 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.880129 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.880135 | orchestrator |
2025-04-10 01:00:11.880142 | orchestrator | TASK [ceph-handler : rgws handler] *********************************************
2025-04-10 01:00:11.880148 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.880154 | orchestrator |
2025-04-10 01:00:11.880160 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
2025-04-10 01:00:11.880167 | orchestrator | Thursday 10 April 2025 00:50:44 +0000 (0:00:00.873) 0:04:26.913 ********
2025-04-10 01:00:11.880173 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.880179 | orchestrator |
2025-04-10 01:00:11.880185 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] ****************************
2025-04-10 01:00:11.880191 | orchestrator | Thursday 10 April 2025 00:50:44 +0000 (0:00:00.165) 0:04:27.079 ********
2025-04-10 01:00:11.880198 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.880204 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.880210 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.880216 | orchestrator |
2025-04-10 01:00:11.880222 | orchestrator | TASK [ceph-handler : rbdmirrors handler] ***************************************
2025-04-10 01:00:11.880228 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.880239 | orchestrator |
2025-04-10 01:00:11.880245 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
2025-04-10 01:00:11.880251 | orchestrator | Thursday 10 April 2025 00:50:45 +0000 (0:00:00.869) 0:04:27.948 ********
2025-04-10 01:00:11.880258 | orchestrator |
2025-04-10 01:00:11.880264 | orchestrator | TASK [ceph-handler : mgrs handler] *********************************************
2025-04-10 01:00:11.880270 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.880277 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 01:00:11.880283 | orchestrator |
2025-04-10 01:00:11.880289 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ********
2025-04-10 01:00:11.880295 | orchestrator | Thursday 10 April 2025 00:50:46 +0000 (0:00:00.968) 0:04:28.916 ********
2025-04-10 01:00:11.880301 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.880307 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.880314 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.880320 | orchestrator |
2025-04-10 01:00:11.880331 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] *******************
2025-04-10 01:00:11.880337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.880343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.880349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.880355 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.880362 | orchestrator |
2025-04-10 01:00:11.880368 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-04-10 01:00:11.880374 | orchestrator | Thursday 10 April 2025 00:50:47 +0000 (0:00:01.145) 0:04:30.062 ********
2025-04-10 01:00:11.880380 | orchestrator |
2025-04-10 01:00:11.880386 | orchestrator | TASK [ceph-handler : copy mgr restart script] **********************************
2025-04-10 01:00:11.880393 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.880399 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.880405 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.880411 | orchestrator |
2025-04-10 01:00:11.880417 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-04-10 01:00:11.880424 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:11.880430 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:11.880436 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:11.880442 | orchestrator |
2025-04-10 01:00:11.880448 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ********************
2025-04-10 01:00:11.880455 | orchestrator | Thursday 10 April 2025 00:50:48 +0000 (0:00:01.482) 0:04:31.544 ********
2025-04-10 01:00:11.880461 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-10 01:00:11.880467 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-10 01:00:11.880473 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-10 01:00:11.880479 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.880486 | orchestrator |
2025-04-10 01:00:11.880501 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] *********
2025-04-10 01:00:11.880540 | orchestrator | Thursday 10 April 2025 00:50:49 +0000 (0:00:00.965) 0:04:32.509 ********
2025-04-10 01:00:11.880549 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.880555 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.880562 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.880569 | orchestrator |
2025-04-10 01:00:11.880575 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ********************
2025-04-10 01:00:11.880582 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.880588 | orchestrator |
2025-04-10 01:00:11.880595 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-04-10 01:00:11.880601 | orchestrator | Thursday 10 April 2025 00:50:50 +0000 (0:00:01.105) 0:04:33.615 ********
2025-04-10 01:00:11.880611 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:00:11.880621 | orchestrator |
2025-04-10 01:00:11.880628 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ******
2025-04-10 01:00:11.880634 | orchestrator | Thursday 10 April 2025 00:50:51 +0000 (0:00:00.795) 0:04:34.411 ********
2025-04-10 01:00:11.880641 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.880647 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.880654 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.880660 | orchestrator |
2025-04-10 01:00:11.880667 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] *****************
2025-04-10 01:00:11.880673 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.880680 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.880686 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.880693 | orchestrator |
2025-04-10 01:00:11.880699 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] ***********************
2025-04-10 01:00:11.880705 | orchestrator | Thursday 10 April 2025 00:50:52 +0000 (0:00:00.898) 0:04:35.309 ********
2025-04-10 01:00:11.880712 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.880718 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.880725 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.880732 | orchestrator |
2025-04-10 01:00:11.880738 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-10 01:00:11.880745 | orchestrator | Thursday 10 April 2025 00:50:54 +0000 (0:00:01.492) 0:04:36.802 ********
2025-04-10 01:00:11.880751 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:11.880758 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:11.880764 | orchestrator |
2025-04-10 01:00:11.880770 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] *******************************
2025-04-10 01:00:11.880777 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.880783 | orchestrator |
2025-04-10 01:00:11.880790 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-10 01:00:11.880796 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:11.880803 | orchestrator |
2025-04-10 01:00:11.880809 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] *******************************
2025-04-10 01:00:11.880816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.880822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.880829 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.880835 | orchestrator |
2025-04-10 01:00:11.880853 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] *********
2025-04-10 01:00:11.880860 | orchestrator | Thursday 10 April 2025 00:50:55 +0000 (0:00:01.362) 0:04:38.165 ********
2025-04-10 01:00:11.880866 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.880872 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.880879 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.880885 | orchestrator |
2025-04-10 01:00:11.880891 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-04-10 01:00:11.880897 | orchestrator | Thursday 10 April 2025 00:50:56 +0000 (0:00:01.077) 0:04:39.242 ********
2025-04-10 01:00:11.880903 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:00:11.880910 | orchestrator |
2025-04-10 01:00:11.880916 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ********
2025-04-10 01:00:11.880922 | orchestrator | Thursday 10 April 2025 00:50:57 +0000 (0:00:00.619) 0:04:39.861 ********
2025-04-10 01:00:11.880928 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.880934 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.880940 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.880946 | orchestrator |
2025-04-10 01:00:11.880953 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] ***********************
2025-04-10 01:00:11.880959 | orchestrator | Thursday 10 April 2025 00:50:57 +0000 (0:00:00.603) 0:04:40.465 ********
2025-04-10 01:00:11.880965 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.880971 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.880977 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.880987 | orchestrator |
2025-04-10 01:00:11.880994 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ********************
2025-04-10 01:00:11.881000 | orchestrator | Thursday 10 April 2025 00:50:59 +0000 (0:00:01.267) 0:04:41.732 ********
2025-04-10 01:00:11.881006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.881015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.881021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.881028 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.881034 | orchestrator |
2025-04-10 01:00:11.881040 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] *********
2025-04-10 01:00:11.881046 | orchestrator | Thursday 10 April 2025 00:50:59 +0000 (0:00:00.687) 0:04:42.419 ********
2025-04-10 01:00:11.881052 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.881059 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.881065 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.881071 | orchestrator |
2025-04-10 01:00:11.881077 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] ****************************
2025-04-10 01:00:11.881083 | orchestrator | Thursday 10 April 2025 00:51:00 +0000 (0:00:00.345) 0:04:42.765 ********
2025-04-10 01:00:11.881089 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.881096 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.881105 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.881112 | orchestrator |
2025-04-10 01:00:11.881118 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
2025-04-10 01:00:11.881158 | orchestrator | Thursday 10 April 2025 00:51:00 +0000 (0:00:00.640) 0:04:43.405 ********
2025-04-10 01:00:11.881167 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.881173 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.881179 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.881186 | orchestrator |
2025-04-10 01:00:11.881192 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ******
2025-04-10 01:00:11.881198 | orchestrator | Thursday 10 April 2025 00:51:01 +0000 (0:00:00.376) 0:04:43.782 ********
2025-04-10 01:00:11.881204 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.881211 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.881217 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.881223 | orchestrator |
2025-04-10 01:00:11.881229 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-10 01:00:11.881235 | orchestrator | Thursday 10 April 2025 00:51:01 +0000 (0:00:00.351) 0:04:44.133 ********
2025-04-10 01:00:11.881241 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.881247 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.881254 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.881260 | orchestrator |
2025-04-10 01:00:11.881266 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-04-10 01:00:11.881272 | orchestrator |
2025-04-10 01:00:11.881278 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-04-10 01:00:11.881285 | orchestrator | Thursday 10 April 2025 00:51:03 +0000 (0:00:02.411) 0:04:46.544 ********
2025-04-10 01:00:11.881291 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 01:00:11.881298 | orchestrator |
2025-04-10 01:00:11.881304 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-04-10 01:00:11.881310 | orchestrator | Thursday 10 April 2025 00:51:04 +0000 (0:00:00.598) 0:04:47.143 ********
2025-04-10 01:00:11.881316 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.881322 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.881329 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.881335 | orchestrator |
2025-04-10 01:00:11.881341 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-04-10 01:00:11.881347 | orchestrator | Thursday 10 April 2025 00:51:05 +0000 (0:00:00.761) 0:04:47.904 ********
2025-04-10 01:00:11.881357 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.881363 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.881370 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.881376 | orchestrator |
2025-04-10 01:00:11.881382 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-04-10 01:00:11.881388 | orchestrator | Thursday 10 April 2025 00:51:05 +0000 (0:00:00.610) 0:04:48.515 ********
2025-04-10 01:00:11.881394 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.881401 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.881407 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.881413 | orchestrator |
2025-04-10 01:00:11.881419 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-04-10 01:00:11.881425 | orchestrator | Thursday 10 April 2025 00:51:06 +0000 (0:00:00.385) 0:04:48.901 ********
2025-04-10 01:00:11.881431 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.881437 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.881444 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.881450 | orchestrator |
2025-04-10 01:00:11.881456 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-04-10 01:00:11.881462 | orchestrator | Thursday 10 April 2025 00:51:06 +0000 (0:00:00.393) 0:04:49.294 ********
2025-04-10 01:00:11.881468 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.881475 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.881481 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.881487 | orchestrator |
2025-04-10 01:00:11.881493 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-04-10 01:00:11.881499 | orchestrator | Thursday 10 April 2025 00:51:07 +0000 (0:00:00.776) 0:04:50.070 ********
2025-04-10 01:00:11.881506 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.881512 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.881518 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.881524 | orchestrator |
2025-04-10 01:00:11.881530 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-04-10 01:00:11.881536 | orchestrator | Thursday 10 April 2025 00:51:08 +0000 (0:00:00.638) 0:04:50.709 ********
2025-04-10 01:00:11.881543 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.881549 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.881555 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.881561 | orchestrator |
2025-04-10 01:00:11.881568 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-04-10 01:00:11.881574 | orchestrator | Thursday 10 April 2025 00:51:08 +0000 (0:00:00.419) 0:04:51.128 ********
2025-04-10 01:00:11.881580 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.881586 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.881593 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.881599 | orchestrator |
2025-04-10 01:00:11.881605 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-04-10 01:00:11.881611 | orchestrator | Thursday 10 April 2025 00:51:08 +0000 (0:00:00.422) 0:04:51.551 ********
2025-04-10 01:00:11.881617 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.881624 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.881630 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.881636 | orchestrator |
2025-04-10 01:00:11.881642 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-04-10 01:00:11.881651 | orchestrator | Thursday 10 April 2025 00:51:09 +0000 (0:00:00.498) 0:04:52.050 ********
2025-04-10 01:00:11.881657 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.881664 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.881670 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.881676 | orchestrator |
2025-04-10 01:00:11.881682 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-04-10 01:00:11.881688 | orchestrator | Thursday 10 April 2025 00:51:10 +0000 (0:00:00.972) 0:04:53.023 ********
2025-04-10 01:00:11.881694 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.881704 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.881743 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.881752 | orchestrator |
2025-04-10 01:00:11.881758 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-04-10 01:00:11.881764 | orchestrator | Thursday 10 April 2025 00:51:11 +0000 (0:00:00.949) 0:04:53.972 ********
2025-04-10 01:00:11.881770 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.881785 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.881792 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.881798 | orchestrator |
2025-04-10 01:00:11.881804 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-04-10 01:00:11.881811 | orchestrator | Thursday 10 April 2025 00:51:11 +0000 (0:00:00.637) 0:04:54.609 ********
2025-04-10 01:00:11.881817 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.881823 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.881829 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.881835 | orchestrator |
2025-04-10 01:00:11.881873 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-04-10 01:00:11.881880 | orchestrator | Thursday 10 April 2025 00:51:12 +0000 (0:00:00.705) 0:04:55.315 ********
2025-04-10 01:00:11.881886 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.881896 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.881902 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.881908 | orchestrator |
2025-04-10 01:00:11.881914 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-04-10 01:00:11.881920 | orchestrator | Thursday 10 April 2025 00:51:13 +0000 (0:00:01.031) 0:04:56.346 ********
2025-04-10 01:00:11.881926 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.881932 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.881939 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.881945 | orchestrator |
2025-04-10 01:00:11.881951 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-04-10 01:00:11.881957 | orchestrator | Thursday 10 April 2025 00:51:14 +0000 (0:00:00.364) 0:04:56.711 ********
2025-04-10 01:00:11.881963 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.881969 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.881976 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.881982 | orchestrator |
2025-04-10 01:00:11.881988 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-04-10 01:00:11.881994 | orchestrator | Thursday 10 April 2025 00:51:14 +0000 (0:00:00.367) 0:04:57.079 ********
2025-04-10 01:00:11.882000 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882007 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882013 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882036 | orchestrator |
2025-04-10 01:00:11.882043 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-04-10 01:00:11.882049 | orchestrator | Thursday 10 April 2025 00:51:15 +0000 (0:00:00.622) 0:04:57.701 ********
2025-04-10 01:00:11.882055 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882061 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882067 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882074 | orchestrator |
2025-04-10 01:00:11.882080 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-04-10 01:00:11.882086 | orchestrator | Thursday 10 April 2025 00:51:15 +0000 (0:00:00.386) 0:04:58.087 ********
2025-04-10 01:00:11.882092 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.882098 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.882104 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.882110 | orchestrator |
2025-04-10 01:00:11.882117 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-04-10 01:00:11.882123 | orchestrator | Thursday 10 April 2025 00:51:15 +0000 (0:00:00.390) 0:04:58.478 ********
2025-04-10 01:00:11.882129 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.882135 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.882146 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.882152 | orchestrator |
2025-04-10 01:00:11.882158 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-04-10 01:00:11.882164 | orchestrator | Thursday 10 April 2025 00:51:16 +0000 (0:00:00.365) 0:04:58.843 ********
2025-04-10 01:00:11.882171 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882177 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882183 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882189 | orchestrator |
2025-04-10 01:00:11.882199 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-04-10 01:00:11.882205 | orchestrator | Thursday 10 April 2025 00:51:16 +0000 (0:00:00.611) 0:04:59.455 ********
2025-04-10 01:00:11.882212 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882218 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882224 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882230 | orchestrator |
2025-04-10 01:00:11.882237 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-04-10 01:00:11.882243 | orchestrator | Thursday 10 April 2025 00:51:17 +0000 (0:00:00.344) 0:04:59.799 ********
2025-04-10 01:00:11.882249 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882255 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882261 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882267 | orchestrator |
2025-04-10 01:00:11.882274 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-04-10 01:00:11.882280 | orchestrator | Thursday 10 April 2025 00:51:17 +0000 (0:00:00.365) 0:05:00.165 ********
2025-04-10 01:00:11.882286 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882292 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882298 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882305 | orchestrator |
2025-04-10 01:00:11.882311 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-04-10 01:00:11.882317 | orchestrator | Thursday 10 April 2025 00:51:17 +0000 (0:00:00.332) 0:05:00.498 ********
2025-04-10 01:00:11.882322 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882328 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882334 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882340 | orchestrator |
2025-04-10 01:00:11.882346 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-04-10 01:00:11.882354 | orchestrator | Thursday 10 April 2025 00:51:18 +0000 (0:00:00.740) 0:05:01.239 ********
2025-04-10 01:00:11.882360 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882366 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882411 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882420 | orchestrator |
2025-04-10 01:00:11.882427 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-04-10 01:00:11.882433 | orchestrator | Thursday 10 April 2025 00:51:18 +0000 (0:00:00.436) 0:05:01.676 ********
2025-04-10 01:00:11.882440 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882447 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882453 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882460 | orchestrator |
2025-04-10 01:00:11.882467 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-04-10 01:00:11.882474 | orchestrator | Thursday 10 April 2025 00:51:19 +0000 (0:00:00.438) 0:05:02.114 ********
2025-04-10 01:00:11.882481 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882486 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882492 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882498 | orchestrator |
2025-04-10 01:00:11.882504 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-04-10 01:00:11.882510 | orchestrator | Thursday 10 April 2025 00:51:19 +0000 (0:00:00.401) 0:05:02.516 ********
2025-04-10 01:00:11.882516 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882522 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882533 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882539 | orchestrator |
2025-04-10 01:00:11.882545 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-04-10 01:00:11.882551 | orchestrator | Thursday 10 April 2025 00:51:20 +0000 (0:00:00.778) 0:05:03.294 ********
2025-04-10 01:00:11.882556 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882562 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882568 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882574 | orchestrator |
2025-04-10 01:00:11.882580 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-04-10 01:00:11.882586 | orchestrator | Thursday 10 April 2025 00:51:20 +0000 (0:00:00.388) 0:05:03.682 ********
2025-04-10 01:00:11.882595 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882605 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882611 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882617 | orchestrator |
2025-04-10 01:00:11.882623 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-04-10 01:00:11.882629 | orchestrator | Thursday 10 April 2025 00:51:21 +0000 (0:00:00.354) 0:05:04.037 ********
2025-04-10 01:00:11.882635 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882641 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882647 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882652 | orchestrator |
2025-04-10 01:00:11.882658 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-04-10 01:00:11.882664 | orchestrator | Thursday 10 April 2025 00:51:21 +0000 (0:00:00.357) 0:05:04.395 ********
2025-04-10 01:00:11.882670 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-10 01:00:11.882676 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-10 01:00:11.882682 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882688 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-10 01:00:11.882693 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-10 01:00:11.882699 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882705 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-10 01:00:11.882711 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-10 01:00:11.882717 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882723 | orchestrator |
2025-04-10 01:00:11.882728 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-04-10 01:00:11.882734 | orchestrator | Thursday 10 April 2025 00:51:22 +0000 (0:00:00.714) 0:05:05.109 ********
2025-04-10 01:00:11.882740 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-04-10 01:00:11.882746 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-04-10 01:00:11.882752 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882758 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-04-10 01:00:11.882764 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-04-10 01:00:11.882770 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882776 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-04-10 01:00:11.882782 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-04-10 01:00:11.882788 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882794 | orchestrator |
2025-04-10 01:00:11.882799 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-04-10 01:00:11.882805 | orchestrator | Thursday 10 April 2025 00:51:22 +0000 (0:00:00.408) 0:05:05.518 ********
2025-04-10 01:00:11.882811 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882817 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882823 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882829 | orchestrator |
2025-04-10 01:00:11.882835 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-04-10 01:00:11.882852 | orchestrator | Thursday 10 April 2025 00:51:23 +0000 (0:00:00.333) 0:05:05.851 ********
2025-04-10 01:00:11.882861 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882867 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882873 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882879 | orchestrator |
2025-04-10 01:00:11.882885 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-04-10 01:00:11.882891 | orchestrator | Thursday 10 April 2025 00:51:23 +0000 (0:00:00.391) 0:05:06.243 ********
2025-04-10 01:00:11.882897 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882902 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882908 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882914 | orchestrator |
2025-04-10 01:00:11.882920 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-10 01:00:11.882926 | orchestrator | Thursday 10 April 2025 00:51:24 +0000 (0:00:00.661) 0:05:06.905 ********
2025-04-10 01:00:11.882966 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.882974 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.882980 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.882986 | orchestrator |
2025-04-10 01:00:11.882992 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-10 01:00:11.882998 | orchestrator | Thursday 10 April 2025 00:51:24 +0000 (0:00:00.392) 0:05:07.298 ********
2025-04-10 01:00:11.883004 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.883009 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.883015 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.883021 | orchestrator |
2025-04-10 01:00:11.883027 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-10 01:00:11.883033 | orchestrator | Thursday 10 April 2025 00:51:24 +0000 (0:00:00.357) 0:05:07.655 ********
2025-04-10 01:00:11.883039 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.883045 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.883051 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.883057 | orchestrator |
2025-04-10 01:00:11.883063 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-10 01:00:11.883069 | orchestrator | Thursday 10 April 2025 00:51:25 +0000 (0:00:00.374) 0:05:08.029 ********
2025-04-10 01:00:11.883075 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-10 01:00:11.883081 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-10 01:00:11.883086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-10 01:00:11.883092 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.883098 | orchestrator |
2025-04-10 01:00:11.883104 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-10 01:00:11.883110 | orchestrator | Thursday 10 April 2025 00:51:26 +0000 (0:00:01.021) 0:05:09.051 ********
2025-04-10 01:00:11.883116 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-10 01:00:11.883122 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-10 01:00:11.883128 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-10 01:00:11.883134 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.883149 | orchestrator |
2025-04-10 01:00:11.883155 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-10 01:00:11.883161 | orchestrator | Thursday 10 April 2025 00:51:26 +0000 (0:00:00.453) 0:05:09.504 ********
2025-04-10 01:00:11.883168 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-10 01:00:11.883173 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-10 01:00:11.883179 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-10 01:00:11.883185 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.883191 | orchestrator |
2025-04-10 01:00:11.883197 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-10 01:00:11.883206 | orchestrator | Thursday 10 April 2025 00:51:27 +0000 (0:00:00.451) 0:05:09.955 ********
2025-04-10 01:00:11.883216 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.883222 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.883228 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.883234 | orchestrator |
2025-04-10 01:00:11.883240 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-04-10 01:00:11.883246 | orchestrator | Thursday 10 April 2025 00:51:27 +0000 (0:00:00.401) 0:05:10.356
******** 2025-04-10 01:00:11.883252 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-10 01:00:11.883258 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.883263 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-10 01:00:11.883269 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.883275 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-10 01:00:11.883281 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.883287 | orchestrator | 2025-04-10 01:00:11.883293 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-10 01:00:11.883299 | orchestrator | Thursday 10 April 2025 00:51:28 +0000 (0:00:00.534) 0:05:10.891 ******** 2025-04-10 01:00:11.883305 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.883311 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.883317 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.883322 | orchestrator | 2025-04-10 01:00:11.883328 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-10 01:00:11.883334 | orchestrator | Thursday 10 April 2025 00:51:28 +0000 (0:00:00.678) 0:05:11.569 ******** 2025-04-10 01:00:11.883340 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.883346 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.883352 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.883358 | orchestrator | 2025-04-10 01:00:11.883364 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-10 01:00:11.883370 | orchestrator | Thursday 10 April 2025 00:51:29 +0000 (0:00:00.371) 0:05:11.941 ******** 2025-04-10 01:00:11.883376 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-10 01:00:11.883382 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.883388 | orchestrator | skipping: [testbed-node-1] => (item=0)  
2025-04-10 01:00:11.883394 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.883399 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-10 01:00:11.883405 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.883411 | orchestrator | 2025-04-10 01:00:11.883417 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-10 01:00:11.883423 | orchestrator | Thursday 10 April 2025 00:51:29 +0000 (0:00:00.492) 0:05:12.433 ******** 2025-04-10 01:00:11.883429 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.883435 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.883440 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.883446 | orchestrator | 2025-04-10 01:00:11.883452 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-10 01:00:11.883458 | orchestrator | Thursday 10 April 2025 00:51:30 +0000 (0:00:00.385) 0:05:12.819 ******** 2025-04-10 01:00:11.883464 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-10 01:00:11.883484 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-10 01:00:11.883491 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-10 01:00:11.883497 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.883505 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-10 01:00:11.883511 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-10 01:00:11.883517 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-10 01:00:11.883523 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.883529 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-10 01:00:11.883538 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-10 01:00:11.883548 | orchestrator | skipping: [testbed-node-2] 
=> (item=testbed-node-5)  2025-04-10 01:00:11.883553 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.883559 | orchestrator | 2025-04-10 01:00:11.883565 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-10 01:00:11.883571 | orchestrator | Thursday 10 April 2025 00:51:31 +0000 (0:00:00.987) 0:05:13.806 ******** 2025-04-10 01:00:11.883577 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.883583 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.883589 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.883595 | orchestrator | 2025-04-10 01:00:11.883601 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-10 01:00:11.883608 | orchestrator | Thursday 10 April 2025 00:51:31 +0000 (0:00:00.697) 0:05:14.503 ******** 2025-04-10 01:00:11.883614 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.883621 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.883628 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.883634 | orchestrator | 2025-04-10 01:00:11.883641 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-10 01:00:11.883647 | orchestrator | Thursday 10 April 2025 00:51:32 +0000 (0:00:00.936) 0:05:15.440 ******** 2025-04-10 01:00:11.883654 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.883661 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.883667 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.883674 | orchestrator | 2025-04-10 01:00:11.883681 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-10 01:00:11.883687 | orchestrator | Thursday 10 April 2025 00:51:33 +0000 (0:00:00.585) 0:05:16.025 ******** 2025-04-10 01:00:11.883694 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.883701 | orchestrator | 
skipping: [testbed-node-1] 2025-04-10 01:00:11.883707 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.883714 | orchestrator | 2025-04-10 01:00:11.883720 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-04-10 01:00:11.883727 | orchestrator | Thursday 10 April 2025 00:51:34 +0000 (0:00:00.855) 0:05:16.881 ******** 2025-04-10 01:00:11.883734 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.883741 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.883748 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.883754 | orchestrator | 2025-04-10 01:00:11.883761 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-04-10 01:00:11.883768 | orchestrator | Thursday 10 April 2025 00:51:34 +0000 (0:00:00.399) 0:05:17.280 ******** 2025-04-10 01:00:11.883775 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:00:11.883781 | orchestrator | 2025-04-10 01:00:11.883788 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-04-10 01:00:11.883795 | orchestrator | Thursday 10 April 2025 00:51:35 +0000 (0:00:00.897) 0:05:18.177 ******** 2025-04-10 01:00:11.883801 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.883808 | orchestrator | 2025-04-10 01:00:11.883815 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-04-10 01:00:11.883822 | orchestrator | Thursday 10 April 2025 00:51:35 +0000 (0:00:00.199) 0:05:18.376 ******** 2025-04-10 01:00:11.883828 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-04-10 01:00:11.883835 | orchestrator | 2025-04-10 01:00:11.883854 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-04-10 01:00:11.883861 | orchestrator | Thursday 10 April 2025 00:51:36 +0000 
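The "generate monitor initial keyring" task above runs once on localhost and produces a shared secret that all three monitors bootstrap from. A simplified sketch of what such a keyring file looks like (real Ceph keys carry a small binary header before the random bytes; the plain `openssl rand` secret here is an illustrative stand-in, not ceph-ansible's actual code):

```shell
# Hedged sketch: build a mon bootstrap keyring with a locally generated secret.
# NOTE: a genuine Ceph key is header+payload, base64-encoded; this is simplified.
set -eu

key=$(openssl rand -base64 16)

cat > ceph.mon.keyring <<EOF
[mon.]
    key = $key
    caps mon = "allow *"
EOF
```

The "copy the initial key in /etc/ceph (for containers)" task that follows then distributes this file so the containerized monitors can read it.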
(0:00:00.885) 0:05:19.262 ******** 2025-04-10 01:00:11.883867 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.883874 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.883881 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.883888 | orchestrator | 2025-04-10 01:00:11.883894 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-04-10 01:00:11.883904 | orchestrator | Thursday 10 April 2025 00:51:36 +0000 (0:00:00.390) 0:05:19.652 ******** 2025-04-10 01:00:11.883915 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.883921 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.883928 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.883935 | orchestrator | 2025-04-10 01:00:11.883942 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-04-10 01:00:11.883949 | orchestrator | Thursday 10 April 2025 00:51:37 +0000 (0:00:00.376) 0:05:20.029 ******** 2025-04-10 01:00:11.883956 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.883963 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.883969 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.883976 | orchestrator | 2025-04-10 01:00:11.883982 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-04-10 01:00:11.883988 | orchestrator | Thursday 10 April 2025 00:51:38 +0000 (0:00:01.227) 0:05:21.256 ******** 2025-04-10 01:00:11.883993 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.883999 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.884005 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.884011 | orchestrator | 2025-04-10 01:00:11.884017 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-04-10 01:00:11.884023 | orchestrator | Thursday 10 April 2025 00:51:39 +0000 (0:00:00.877) 0:05:22.134 ******** 2025-04-10 
01:00:11.884029 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.884034 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.884040 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.884046 | orchestrator | 2025-04-10 01:00:11.884052 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-04-10 01:00:11.884073 | orchestrator | Thursday 10 April 2025 00:51:40 +0000 (0:00:00.771) 0:05:22.905 ******** 2025-04-10 01:00:11.884080 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.884086 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.884092 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.884098 | orchestrator | 2025-04-10 01:00:11.884104 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-04-10 01:00:11.884110 | orchestrator | Thursday 10 April 2025 00:51:40 +0000 (0:00:00.738) 0:05:23.643 ******** 2025-04-10 01:00:11.884116 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.884122 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.884130 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.884136 | orchestrator | 2025-04-10 01:00:11.884142 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-04-10 01:00:11.884148 | orchestrator | Thursday 10 April 2025 00:51:41 +0000 (0:00:00.684) 0:05:24.327 ******** 2025-04-10 01:00:11.884154 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.884160 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.884166 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.884172 | orchestrator | 2025-04-10 01:00:11.884178 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-04-10 01:00:11.884184 | orchestrator | Thursday 10 April 2025 00:51:42 +0000 (0:00:00.386) 0:05:24.714 ******** 2025-04-10 01:00:11.884190 | orchestrator | skipping: 
[testbed-node-0] 2025-04-10 01:00:11.884195 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.884201 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.884207 | orchestrator | 2025-04-10 01:00:11.884213 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-04-10 01:00:11.884219 | orchestrator | Thursday 10 April 2025 00:51:42 +0000 (0:00:00.364) 0:05:25.078 ******** 2025-04-10 01:00:11.884225 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.884231 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.884237 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.884242 | orchestrator | 2025-04-10 01:00:11.884248 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-04-10 01:00:11.884254 | orchestrator | Thursday 10 April 2025 00:51:42 +0000 (0:00:00.421) 0:05:25.500 ******** 2025-04-10 01:00:11.884260 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.884266 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.884275 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.884281 | orchestrator | 2025-04-10 01:00:11.884287 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-04-10 01:00:11.884293 | orchestrator | Thursday 10 April 2025 00:51:44 +0000 (0:00:01.631) 0:05:27.132 ******** 2025-04-10 01:00:11.884299 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.884308 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.884315 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.884320 | orchestrator | 2025-04-10 01:00:11.884326 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-04-10 01:00:11.884332 | orchestrator | Thursday 10 April 2025 00:51:44 +0000 (0:00:00.386) 0:05:27.519 ******** 2025-04-10 01:00:11.884338 | orchestrator | included: 
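The "ceph monitor mkfs with keyring" task above initializes each monitor's data directory inside a container. A sketch of the shape of that invocation, with the command only echoed so it runs anywhere; the image name, mount paths, and flags are illustrative assumptions, not values taken from this job:

```shell
# Hedged sketch of a containerized "ceph-mon --mkfs"; echoed, not executed.
set -eu
MON_NAME=testbed-node-0
FSID=$(cat /proc/sys/kernel/random/uuid)   # a real deploy reuses the cluster fsid

CMD="docker run --rm \
  -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph \
  quay.io/ceph/daemon:latest \
  ceph-mon --mkfs -i $MON_NAME --fsid $FSID \
  --keyring /etc/ceph/ceph.mon.keyring"
echo "$CMD"
```

The `--mkfs` variant "without keyring" that is skipped next is the same call minus the `--keyring` argument.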
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:00:11.884344 | orchestrator | 2025-04-10 01:00:11.884350 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-04-10 01:00:11.884356 | orchestrator | Thursday 10 April 2025 00:51:45 +0000 (0:00:00.991) 0:05:28.510 ******** 2025-04-10 01:00:11.884362 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.884368 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.884374 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.884380 | orchestrator | 2025-04-10 01:00:11.884386 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-04-10 01:00:11.884392 | orchestrator | Thursday 10 April 2025 00:51:46 +0000 (0:00:00.367) 0:05:28.877 ******** 2025-04-10 01:00:11.884398 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.884404 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.884409 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.884415 | orchestrator | 2025-04-10 01:00:11.884421 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-04-10 01:00:11.884427 | orchestrator | Thursday 10 April 2025 00:51:46 +0000 (0:00:00.389) 0:05:29.267 ******** 2025-04-10 01:00:11.884433 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:00:11.884439 | orchestrator | 2025-04-10 01:00:11.884445 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-04-10 01:00:11.884451 | orchestrator | Thursday 10 April 2025 00:51:47 +0000 (0:00:00.874) 0:05:30.141 ******** 2025-04-10 01:00:11.884457 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.884462 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.884468 | orchestrator | 
changed: [testbed-node-2] 2025-04-10 01:00:11.884474 | orchestrator | 2025-04-10 01:00:11.884480 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-04-10 01:00:11.884489 | orchestrator | Thursday 10 April 2025 00:51:48 +0000 (0:00:01.380) 0:05:31.521 ******** 2025-04-10 01:00:11.884495 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.884501 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.884506 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.884512 | orchestrator | 2025-04-10 01:00:11.884518 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-04-10 01:00:11.884524 | orchestrator | Thursday 10 April 2025 00:51:50 +0000 (0:00:01.283) 0:05:32.805 ******** 2025-04-10 01:00:11.884530 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.884535 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.884541 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.884547 | orchestrator | 2025-04-10 01:00:11.884553 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-04-10 01:00:11.884559 | orchestrator | Thursday 10 April 2025 00:51:51 +0000 (0:00:01.839) 0:05:34.645 ******** 2025-04-10 01:00:11.884565 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.884571 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.884577 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.884583 | orchestrator | 2025-04-10 01:00:11.884589 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-04-10 01:00:11.884611 | orchestrator | Thursday 10 April 2025 00:51:54 +0000 (0:00:02.233) 0:05:36.878 ******** 2025-04-10 01:00:11.884619 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:00:11.884625 | orchestrator | 
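The three tasks above (generate the unit file, generate the `ceph-mon.target` file, enable the target) wire the containerized monitor into systemd. A minimal sketch of such a templated unit, written to `/tmp` for illustration; the container runtime and unit contents are assumptions, not the role's actual template:

```shell
# Hedged sketch of a systemd unit for a containerized mon, instance per host.
set -eu
cat > /tmp/ceph-mon@.service <<'EOF'
[Unit]
Description=Ceph monitor %i (containerized)
After=network-online.target
PartOf=ceph-mon.target

[Service]
ExecStart=/usr/bin/docker start -a ceph-mon-%i
ExecStop=/usr/bin/docker stop ceph-mon-%i
Restart=always

[Install]
WantedBy=ceph-mon.target
EOF
# On a real node this would be followed by (as in the tasks above):
#   systemctl daemon-reload
#   systemctl enable --now ceph-mon.target
```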
2025-04-10 01:00:11.884631 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-04-10 01:00:11.884636 | orchestrator | Thursday 10 April 2025 00:51:54 +0000 (0:00:00.601) 0:05:37.479 ******** 2025-04-10 01:00:11.884642 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 2025-04-10 01:00:11.884648 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.884654 | orchestrator | 2025-04-10 01:00:11.884660 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-04-10 01:00:11.884666 | orchestrator | Thursday 10 April 2025 00:52:16 +0000 (0:00:21.528) 0:05:59.008 ******** 2025-04-10 01:00:11.884672 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.884678 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.884684 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.884689 | orchestrator | 2025-04-10 01:00:11.884695 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-04-10 01:00:11.884701 | orchestrator | Thursday 10 April 2025 00:52:23 +0000 (0:00:07.325) 0:06:06.333 ******** 2025-04-10 01:00:11.884707 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.884713 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.884719 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.884725 | orchestrator | 2025-04-10 01:00:11.884730 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-10 01:00:11.884736 | orchestrator | Thursday 10 April 2025 00:52:24 +0000 (0:00:01.277) 0:06:07.610 ******** 2025-04-10 01:00:11.884742 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.884748 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.884754 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.884760 | orchestrator | 2025-04-10 
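The "waiting for the monitor(s) to form the quorum..." task above is a retry loop around the monitor status (it needed one retry here, ~21 s). A runnable sketch of that pattern; the `ceph` function below is a local stub standing in for the real CLI, and the JSON shape is illustrative:

```shell
# Hedged sketch of the quorum wait: poll quorum status until all mons appear.
# A stub replaces the real "ceph quorum_status" so this runs without a cluster.
ceph() {
  echo '{"quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"]}'
}

status=""
for attempt in 1 2 3 4 5 6 7 8 9 10; do
  if ceph quorum_status | grep -q '"testbed-node-2"'; then
    status="quorum formed"
    break
  fi
  sleep 2   # a real deploy waits longer between retries
done
echo "$status"
```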
01:00:11.884766 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-04-10 01:00:11.884771 | orchestrator | Thursday 10 April 2025 00:52:25 +0000 (0:00:00.763) 0:06:08.374 ******** 2025-04-10 01:00:11.884777 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:00:11.884783 | orchestrator | 2025-04-10 01:00:11.884789 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-04-10 01:00:11.884795 | orchestrator | Thursday 10 April 2025 00:52:26 +0000 (0:00:00.778) 0:06:09.153 ******** 2025-04-10 01:00:11.884801 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.884807 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.884813 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.884819 | orchestrator | 2025-04-10 01:00:11.884825 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-04-10 01:00:11.884830 | orchestrator | Thursday 10 April 2025 00:52:26 +0000 (0:00:00.374) 0:06:09.527 ******** 2025-04-10 01:00:11.884836 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.884871 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.884877 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.884883 | orchestrator | 2025-04-10 01:00:11.884889 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-04-10 01:00:11.884895 | orchestrator | Thursday 10 April 2025 00:52:28 +0000 (0:00:01.219) 0:06:10.746 ******** 2025-04-10 01:00:11.884901 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-10 01:00:11.884907 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-10 01:00:11.884913 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-10 01:00:11.884918 | orchestrator | skipping: 
[testbed-node-0] 2025-04-10 01:00:11.884923 | orchestrator | 2025-04-10 01:00:11.884929 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-04-10 01:00:11.884938 | orchestrator | Thursday 10 April 2025 00:52:29 +0000 (0:00:01.227) 0:06:11.973 ******** 2025-04-10 01:00:11.884943 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.884949 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.884954 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.884959 | orchestrator | 2025-04-10 01:00:11.884965 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-10 01:00:11.884970 | orchestrator | Thursday 10 April 2025 00:52:29 +0000 (0:00:00.401) 0:06:12.375 ******** 2025-04-10 01:00:11.884975 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.884981 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.884986 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.884991 | orchestrator | 2025-04-10 01:00:11.884997 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-04-10 01:00:11.885002 | orchestrator | 2025-04-10 01:00:11.885007 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-10 01:00:11.885013 | orchestrator | Thursday 10 April 2025 00:52:31 +0000 (0:00:02.141) 0:06:14.516 ******** 2025-04-10 01:00:11.885018 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:00:11.885023 | orchestrator | 2025-04-10 01:00:11.885029 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-10 01:00:11.885034 | orchestrator | Thursday 10 April 2025 00:52:32 +0000 (0:00:00.769) 0:06:15.285 ******** 2025-04-10 01:00:11.885039 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.885045 | 
orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.885050 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.885055 | orchestrator | 2025-04-10 01:00:11.885065 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-10 01:00:11.885070 | orchestrator | Thursday 10 April 2025 00:52:33 +0000 (0:00:00.748) 0:06:16.034 ******** 2025-04-10 01:00:11.885076 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885081 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885086 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885092 | orchestrator | 2025-04-10 01:00:11.885097 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-10 01:00:11.885117 | orchestrator | Thursday 10 April 2025 00:52:33 +0000 (0:00:00.333) 0:06:16.368 ******** 2025-04-10 01:00:11.885124 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885132 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885137 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885143 | orchestrator | 2025-04-10 01:00:11.885148 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-10 01:00:11.885153 | orchestrator | Thursday 10 April 2025 00:52:34 +0000 (0:00:00.662) 0:06:17.031 ******** 2025-04-10 01:00:11.885159 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885164 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885169 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885175 | orchestrator | 2025-04-10 01:00:11.885180 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-10 01:00:11.885185 | orchestrator | Thursday 10 April 2025 00:52:34 +0000 (0:00:00.361) 0:06:17.392 ******** 2025-04-10 01:00:11.885191 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.885196 | orchestrator | ok: 
[testbed-node-1] 2025-04-10 01:00:11.885201 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.885207 | orchestrator | 2025-04-10 01:00:11.885212 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-10 01:00:11.885217 | orchestrator | Thursday 10 April 2025 00:52:35 +0000 (0:00:00.823) 0:06:18.216 ******** 2025-04-10 01:00:11.885223 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885228 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885233 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885239 | orchestrator | 2025-04-10 01:00:11.885244 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-10 01:00:11.885253 | orchestrator | Thursday 10 April 2025 00:52:35 +0000 (0:00:00.358) 0:06:18.575 ******** 2025-04-10 01:00:11.885258 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885263 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885269 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885274 | orchestrator | 2025-04-10 01:00:11.885280 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-10 01:00:11.885285 | orchestrator | Thursday 10 April 2025 00:52:36 +0000 (0:00:00.641) 0:06:19.217 ******** 2025-04-10 01:00:11.885291 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885296 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885302 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885307 | orchestrator | 2025-04-10 01:00:11.885312 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-10 01:00:11.885317 | orchestrator | Thursday 10 April 2025 00:52:36 +0000 (0:00:00.399) 0:06:19.616 ******** 2025-04-10 01:00:11.885323 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885328 | orchestrator | skipping: 
[testbed-node-1] 2025-04-10 01:00:11.885333 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885339 | orchestrator | 2025-04-10 01:00:11.885344 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-10 01:00:11.885349 | orchestrator | Thursday 10 April 2025 00:52:37 +0000 (0:00:00.372) 0:06:19.989 ******** 2025-04-10 01:00:11.885355 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885360 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885365 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885370 | orchestrator | 2025-04-10 01:00:11.885376 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-10 01:00:11.885381 | orchestrator | Thursday 10 April 2025 00:52:37 +0000 (0:00:00.379) 0:06:20.368 ******** 2025-04-10 01:00:11.885387 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.885392 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.885398 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.885403 | orchestrator | 2025-04-10 01:00:11.885408 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-10 01:00:11.885414 | orchestrator | Thursday 10 April 2025 00:52:38 +0000 (0:00:01.119) 0:06:21.488 ******** 2025-04-10 01:00:11.885419 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885424 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885429 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885435 | orchestrator | 2025-04-10 01:00:11.885440 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-10 01:00:11.885446 | orchestrator | Thursday 10 April 2025 00:52:39 +0000 (0:00:00.366) 0:06:21.854 ******** 2025-04-10 01:00:11.885451 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.885456 | orchestrator | ok: [testbed-node-1] 2025-04-10 
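The block of "check for a ... container" tasks above probes each node for running daemon containers; the results later drive the `handler_*_status` facts. A sketch of that probe pattern, with a stub in place of the real container-runtime query (the commented `docker ps` filter is an illustrative assumption, not the role's exact command):

```shell
# Hedged sketch of a "check for a mon container" style probe.
check_container() {
  # On a real node this would be something like:
  #   docker ps -q --filter "name=ceph-mon-$(hostname)"
  echo abc123   # stub: pretend one container ID was found
}

if [ -n "$(check_container)" ]; then
  echo "mon container running"
fi
```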
01:00:11.885462 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.885467 | orchestrator | 2025-04-10 01:00:11.885472 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-10 01:00:11.885478 | orchestrator | Thursday 10 April 2025 00:52:39 +0000 (0:00:00.383) 0:06:22.237 ******** 2025-04-10 01:00:11.885483 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885488 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885494 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885499 | orchestrator | 2025-04-10 01:00:11.885504 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-10 01:00:11.885510 | orchestrator | Thursday 10 April 2025 00:52:39 +0000 (0:00:00.368) 0:06:22.606 ******** 2025-04-10 01:00:11.885515 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885520 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885525 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885531 | orchestrator | 2025-04-10 01:00:11.885536 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-10 01:00:11.885545 | orchestrator | Thursday 10 April 2025 00:52:40 +0000 (0:00:00.723) 0:06:23.329 ******** 2025-04-10 01:00:11.885550 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885556 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885561 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885566 | orchestrator | 2025-04-10 01:00:11.885572 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-10 01:00:11.885580 | orchestrator | Thursday 10 April 2025 00:52:41 +0000 (0:00:00.389) 0:06:23.719 ******** 2025-04-10 01:00:11.885585 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885590 | orchestrator | skipping: [testbed-node-1] 2025-04-10 
01:00:11.885596 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885601 | orchestrator | 2025-04-10 01:00:11.885606 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-10 01:00:11.885624 | orchestrator | Thursday 10 April 2025 00:52:41 +0000 (0:00:00.334) 0:06:24.054 ******** 2025-04-10 01:00:11.885630 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885636 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885641 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885646 | orchestrator | 2025-04-10 01:00:11.885652 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-10 01:00:11.885657 | orchestrator | Thursday 10 April 2025 00:52:41 +0000 (0:00:00.351) 0:06:24.405 ******** 2025-04-10 01:00:11.885663 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.885668 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.885673 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.885679 | orchestrator | 2025-04-10 01:00:11.885684 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-10 01:00:11.885690 | orchestrator | Thursday 10 April 2025 00:52:42 +0000 (0:00:00.731) 0:06:25.137 ******** 2025-04-10 01:00:11.885695 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.885703 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.885708 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.885714 | orchestrator | 2025-04-10 01:00:11.885719 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-10 01:00:11.885724 | orchestrator | Thursday 10 April 2025 00:52:42 +0000 (0:00:00.406) 0:06:25.543 ******** 2025-04-10 01:00:11.885730 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885735 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885740 | orchestrator | 
skipping: [testbed-node-2] 2025-04-10 01:00:11.885746 | orchestrator | 2025-04-10 01:00:11.885751 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-10 01:00:11.885757 | orchestrator | Thursday 10 April 2025 00:52:43 +0000 (0:00:00.343) 0:06:25.886 ******** 2025-04-10 01:00:11.885762 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885767 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885772 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885778 | orchestrator | 2025-04-10 01:00:11.885783 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-10 01:00:11.885789 | orchestrator | Thursday 10 April 2025 00:52:43 +0000 (0:00:00.668) 0:06:26.555 ******** 2025-04-10 01:00:11.885794 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885799 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885805 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885810 | orchestrator | 2025-04-10 01:00:11.885815 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-10 01:00:11.885821 | orchestrator | Thursday 10 April 2025 00:52:44 +0000 (0:00:00.377) 0:06:26.932 ******** 2025-04-10 01:00:11.885826 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885832 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885837 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885852 | orchestrator | 2025-04-10 01:00:11.885858 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-10 01:00:11.885863 | orchestrator | Thursday 10 April 2025 00:52:44 +0000 (0:00:00.371) 0:06:27.304 ******** 2025-04-10 01:00:11.885871 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885877 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885882 | orchestrator | 
skipping: [testbed-node-2] 2025-04-10 01:00:11.885888 | orchestrator | 2025-04-10 01:00:11.885893 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-10 01:00:11.885898 | orchestrator | Thursday 10 April 2025 00:52:44 +0000 (0:00:00.353) 0:06:27.657 ******** 2025-04-10 01:00:11.885904 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885909 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885914 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885919 | orchestrator | 2025-04-10 01:00:11.885925 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-10 01:00:11.885930 | orchestrator | Thursday 10 April 2025 00:52:45 +0000 (0:00:00.643) 0:06:28.300 ******** 2025-04-10 01:00:11.885935 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885941 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885946 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885951 | orchestrator | 2025-04-10 01:00:11.885957 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-10 01:00:11.885962 | orchestrator | Thursday 10 April 2025 00:52:46 +0000 (0:00:00.392) 0:06:28.693 ******** 2025-04-10 01:00:11.885967 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.885973 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.885978 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.885983 | orchestrator | 2025-04-10 01:00:11.885989 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-10 01:00:11.885994 | orchestrator | Thursday 10 April 2025 00:52:46 +0000 (0:00:00.363) 0:06:29.057 ******** 2025-04-10 01:00:11.886000 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886005 | orchestrator | skipping: 
[testbed-node-1] 2025-04-10 01:00:11.886010 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886031 | orchestrator | 2025-04-10 01:00:11.886036 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-10 01:00:11.886042 | orchestrator | Thursday 10 April 2025 00:52:46 +0000 (0:00:00.362) 0:06:29.420 ******** 2025-04-10 01:00:11.886047 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886052 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886058 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886063 | orchestrator | 2025-04-10 01:00:11.886068 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-10 01:00:11.886074 | orchestrator | Thursday 10 April 2025 00:52:47 +0000 (0:00:00.672) 0:06:30.092 ******** 2025-04-10 01:00:11.886079 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886085 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886090 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886095 | orchestrator | 2025-04-10 01:00:11.886100 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-10 01:00:11.886106 | orchestrator | Thursday 10 April 2025 00:52:47 +0000 (0:00:00.383) 0:06:30.476 ******** 2025-04-10 01:00:11.886111 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886117 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886122 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886127 | orchestrator | 2025-04-10 01:00:11.886146 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-10 01:00:11.886152 | orchestrator | Thursday 10 April 2025 00:52:48 +0000 (0:00:00.384) 0:06:30.861 ******** 2025-04-10 01:00:11.886158 | orchestrator | skipping: [testbed-node-0] => 
(item=)  2025-04-10 01:00:11.886163 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-10 01:00:11.886169 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886174 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-10 01:00:11.886179 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-10 01:00:11.886188 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886194 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-10 01:00:11.886199 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-10 01:00:11.886204 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886210 | orchestrator | 2025-04-10 01:00:11.886215 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-10 01:00:11.886221 | orchestrator | Thursday 10 April 2025 00:52:48 +0000 (0:00:00.621) 0:06:31.482 ******** 2025-04-10 01:00:11.886226 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-10 01:00:11.886231 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-10 01:00:11.886237 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886242 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-10 01:00:11.886247 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-10 01:00:11.886252 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886258 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-10 01:00:11.886263 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-10 01:00:11.886269 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886274 | orchestrator | 2025-04-10 01:00:11.886279 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-10 01:00:11.886287 | orchestrator | Thursday 10 April 2025 00:52:49 +0000 
(0:00:00.878) 0:06:32.360 ******** 2025-04-10 01:00:11.886293 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886298 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886303 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886309 | orchestrator | 2025-04-10 01:00:11.886314 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-10 01:00:11.886319 | orchestrator | Thursday 10 April 2025 00:52:50 +0000 (0:00:00.367) 0:06:32.728 ******** 2025-04-10 01:00:11.886325 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886330 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886335 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886341 | orchestrator | 2025-04-10 01:00:11.886346 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-10 01:00:11.886351 | orchestrator | Thursday 10 April 2025 00:52:50 +0000 (0:00:00.442) 0:06:33.171 ******** 2025-04-10 01:00:11.886357 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886365 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886370 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886375 | orchestrator | 2025-04-10 01:00:11.886381 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-10 01:00:11.886386 | orchestrator | Thursday 10 April 2025 00:52:50 +0000 (0:00:00.462) 0:06:33.633 ******** 2025-04-10 01:00:11.886392 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886397 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886402 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886407 | orchestrator | 2025-04-10 01:00:11.886413 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-10 01:00:11.886418 | 
orchestrator | Thursday 10 April 2025 00:52:51 +0000 (0:00:00.651) 0:06:34.284 ******** 2025-04-10 01:00:11.886423 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886429 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886434 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886439 | orchestrator | 2025-04-10 01:00:11.886445 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-10 01:00:11.886450 | orchestrator | Thursday 10 April 2025 00:52:51 +0000 (0:00:00.361) 0:06:34.645 ******** 2025-04-10 01:00:11.886455 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886461 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886466 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886476 | orchestrator | 2025-04-10 01:00:11.886481 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-10 01:00:11.886487 | orchestrator | Thursday 10 April 2025 00:52:52 +0000 (0:00:00.366) 0:06:35.012 ******** 2025-04-10 01:00:11.886492 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-10 01:00:11.886497 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-10 01:00:11.886503 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-10 01:00:11.886508 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886513 | orchestrator | 2025-04-10 01:00:11.886519 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-10 01:00:11.886524 | orchestrator | Thursday 10 April 2025 00:52:52 +0000 (0:00:00.417) 0:06:35.429 ******** 2025-04-10 01:00:11.886529 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-10 01:00:11.886535 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-10 01:00:11.886540 | orchestrator | skipping: [testbed-node-0] 
=> (item=testbed-node-5)  2025-04-10 01:00:11.886545 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886551 | orchestrator | 2025-04-10 01:00:11.886556 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-10 01:00:11.886561 | orchestrator | Thursday 10 April 2025 00:52:53 +0000 (0:00:00.427) 0:06:35.857 ******** 2025-04-10 01:00:11.886567 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-10 01:00:11.886572 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-10 01:00:11.886577 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-10 01:00:11.886595 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886602 | orchestrator | 2025-04-10 01:00:11.886607 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-10 01:00:11.886613 | orchestrator | Thursday 10 April 2025 00:52:53 +0000 (0:00:00.741) 0:06:36.598 ******** 2025-04-10 01:00:11.886618 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886623 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886629 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886634 | orchestrator | 2025-04-10 01:00:11.886639 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-10 01:00:11.886645 | orchestrator | Thursday 10 April 2025 00:52:54 +0000 (0:00:00.666) 0:06:37.265 ******** 2025-04-10 01:00:11.886653 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-10 01:00:11.886658 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886664 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-10 01:00:11.886669 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886674 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-10 01:00:11.886680 | orchestrator | skipping: [testbed-node-2] 
2025-04-10 01:00:11.886685 | orchestrator | 2025-04-10 01:00:11.886690 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-10 01:00:11.886695 | orchestrator | Thursday 10 April 2025 00:52:55 +0000 (0:00:00.549) 0:06:37.815 ******** 2025-04-10 01:00:11.886701 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886706 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886711 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886717 | orchestrator | 2025-04-10 01:00:11.886722 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-10 01:00:11.886727 | orchestrator | Thursday 10 April 2025 00:52:55 +0000 (0:00:00.376) 0:06:38.191 ******** 2025-04-10 01:00:11.886733 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886738 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886743 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886749 | orchestrator | 2025-04-10 01:00:11.886754 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-10 01:00:11.886759 | orchestrator | Thursday 10 April 2025 00:52:55 +0000 (0:00:00.381) 0:06:38.572 ******** 2025-04-10 01:00:11.886768 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-10 01:00:11.886774 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886779 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-10 01:00:11.886784 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886790 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-10 01:00:11.886795 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886800 | orchestrator | 2025-04-10 01:00:11.886806 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-10 01:00:11.886811 | orchestrator | Thursday 10 April 2025 00:52:56 +0000 
(0:00:00.832) 0:06:39.405 ******** 2025-04-10 01:00:11.886817 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886822 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886827 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886832 | orchestrator | 2025-04-10 01:00:11.886862 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-10 01:00:11.886869 | orchestrator | Thursday 10 April 2025 00:52:57 +0000 (0:00:00.393) 0:06:39.799 ******** 2025-04-10 01:00:11.886875 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-10 01:00:11.886881 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-10 01:00:11.886886 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-10 01:00:11.886891 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886897 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-10 01:00:11.886902 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-10 01:00:11.886907 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-10 01:00:11.886913 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886918 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-10 01:00:11.886923 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-10 01:00:11.886929 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-10 01:00:11.886934 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886939 | orchestrator | 2025-04-10 01:00:11.886945 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-10 01:00:11.886950 | orchestrator | Thursday 10 April 2025 00:52:58 +0000 (0:00:00.968) 0:06:40.768 ******** 2025-04-10 01:00:11.886955 | orchestrator | skipping: [testbed-node-0] 2025-04-10 
01:00:11.886961 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.886966 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.886971 | orchestrator | 2025-04-10 01:00:11.886979 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-10 01:00:11.886984 | orchestrator | Thursday 10 April 2025 00:52:58 +0000 (0:00:00.594) 0:06:41.362 ******** 2025-04-10 01:00:11.886990 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.886995 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.887000 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.887006 | orchestrator | 2025-04-10 01:00:11.887011 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-10 01:00:11.887016 | orchestrator | Thursday 10 April 2025 00:52:59 +0000 (0:00:00.831) 0:06:42.194 ******** 2025-04-10 01:00:11.887021 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.887027 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.887032 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.887037 | orchestrator | 2025-04-10 01:00:11.887043 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-10 01:00:11.887048 | orchestrator | Thursday 10 April 2025 00:53:00 +0000 (0:00:00.565) 0:06:42.759 ******** 2025-04-10 01:00:11.887053 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.887059 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.887064 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.887069 | orchestrator | 2025-04-10 01:00:11.887074 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-04-10 01:00:11.887098 | orchestrator | Thursday 10 April 2025 00:53:00 +0000 (0:00:00.888) 0:06:43.648 ******** 2025-04-10 01:00:11.887105 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-04-10 01:00:11.887110 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-10 01:00:11.887115 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-10 01:00:11.887121 | orchestrator | 2025-04-10 01:00:11.887126 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-04-10 01:00:11.887131 | orchestrator | Thursday 10 April 2025 00:53:01 +0000 (0:00:00.753) 0:06:44.402 ******** 2025-04-10 01:00:11.887137 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:00:11.887142 | orchestrator | 2025-04-10 01:00:11.887147 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-04-10 01:00:11.887153 | orchestrator | Thursday 10 April 2025 00:53:02 +0000 (0:00:00.604) 0:06:45.006 ******** 2025-04-10 01:00:11.887158 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.887164 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.887174 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.887179 | orchestrator | 2025-04-10 01:00:11.887185 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-04-10 01:00:11.887190 | orchestrator | Thursday 10 April 2025 00:53:03 +0000 (0:00:00.691) 0:06:45.698 ******** 2025-04-10 01:00:11.887195 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.887201 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.887209 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.887214 | orchestrator | 2025-04-10 01:00:11.887220 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-04-10 01:00:11.887225 | orchestrator | Thursday 10 April 2025 00:53:03 +0000 (0:00:00.652) 0:06:46.351 ******** 2025-04-10 
01:00:11.887231 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-10 01:00:11.887236 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-10 01:00:11.887242 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-10 01:00:11.887247 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-04-10 01:00:11.887253 | orchestrator | 2025-04-10 01:00:11.887258 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-04-10 01:00:11.887263 | orchestrator | Thursday 10 April 2025 00:53:11 +0000 (0:00:08.190) 0:06:54.541 ******** 2025-04-10 01:00:11.887269 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.887274 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.887279 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.887285 | orchestrator | 2025-04-10 01:00:11.887290 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-04-10 01:00:11.887295 | orchestrator | Thursday 10 April 2025 00:53:12 +0000 (0:00:00.603) 0:06:55.145 ******** 2025-04-10 01:00:11.887301 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-10 01:00:11.887306 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-10 01:00:11.887311 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-10 01:00:11.887317 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-04-10 01:00:11.887322 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:00:11.887327 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:00:11.887333 | orchestrator | 2025-04-10 01:00:11.887338 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-04-10 01:00:11.887343 | orchestrator | Thursday 10 April 2025 00:53:14 +0000 (0:00:01.857) 0:06:57.002 ******** 2025-04-10 01:00:11.887348 | 
orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-10 01:00:11.887354 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-10 01:00:11.887359 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-10 01:00:11.887364 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-10 01:00:11.887373 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-04-10 01:00:11.887379 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-04-10 01:00:11.887384 | orchestrator | 2025-04-10 01:00:11.887389 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-04-10 01:00:11.887394 | orchestrator | Thursday 10 April 2025 00:53:15 +0000 (0:00:01.258) 0:06:58.261 ******** 2025-04-10 01:00:11.887400 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.887405 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.887410 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.887415 | orchestrator | 2025-04-10 01:00:11.887420 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-04-10 01:00:11.887425 | orchestrator | Thursday 10 April 2025 00:53:16 +0000 (0:00:00.935) 0:06:59.197 ******** 2025-04-10 01:00:11.887430 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.887435 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.887439 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.887444 | orchestrator | 2025-04-10 01:00:11.887449 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-04-10 01:00:11.887454 | orchestrator | Thursday 10 April 2025 00:53:16 +0000 (0:00:00.375) 0:06:59.572 ******** 2025-04-10 01:00:11.887459 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.887463 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.887468 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.887473 | 
orchestrator |
2025-04-10 01:00:11.887480 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] ****************************************
2025-04-10 01:00:11.887485 | orchestrator | Thursday 10 April 2025 00:53:17 +0000 (0:00:00.338) 0:06:59.910 ********
2025-04-10 01:00:11.887490 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 01:00:11.887495 | orchestrator |
2025-04-10 01:00:11.887500 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] *************
2025-04-10 01:00:11.887504 | orchestrator | Thursday 10 April 2025 00:53:18 +0000 (0:00:00.849) 0:07:00.760 ********
2025-04-10 01:00:11.887509 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.887514 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.887530 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.887536 | orchestrator |
2025-04-10 01:00:11.887541 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************
2025-04-10 01:00:11.887546 | orchestrator | Thursday 10 April 2025 00:53:18 +0000 (0:00:00.507) 0:07:01.267 ********
2025-04-10 01:00:11.887551 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.887555 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.887560 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.887565 | orchestrator |
2025-04-10 01:00:11.887570 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************
2025-04-10 01:00:11.887575 | orchestrator | Thursday 10 April 2025 00:53:19 +0000 (0:00:00.429) 0:07:01.697 ********
2025-04-10 01:00:11.887580 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 01:00:11.887584 | orchestrator |
2025-04-10 01:00:11.887589 | orchestrator | TASK [ceph-mgr : generate systemd unit file] ***********************************
2025-04-10 01:00:11.887594 | orchestrator | Thursday 10 April 2025 00:53:19 +0000 (0:00:00.928) 0:07:02.625 ********
2025-04-10 01:00:11.887599 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:11.887604 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:11.887609 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:11.887613 | orchestrator |
2025-04-10 01:00:11.887618 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************
2025-04-10 01:00:11.887623 | orchestrator | Thursday 10 April 2025 00:53:21 +0000 (0:00:01.362) 0:07:03.987 ********
2025-04-10 01:00:11.887628 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:11.887633 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:11.887637 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:11.887645 | orchestrator |
2025-04-10 01:00:11.887650 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] ***************************************
2025-04-10 01:00:11.887655 | orchestrator | Thursday 10 April 2025 00:53:22 +0000 (0:00:01.186) 0:07:05.174 ********
2025-04-10 01:00:11.887660 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:11.887665 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:11.887669 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:11.887674 | orchestrator |
2025-04-10 01:00:11.887679 | orchestrator | TASK [ceph-mgr : systemd start mgr] ********************************************
2025-04-10 01:00:11.887684 | orchestrator | Thursday 10 April 2025 00:53:24 +0000 (0:00:01.990) 0:07:07.165 ********
2025-04-10 01:00:11.887689 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:11.887694 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:11.887698 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:11.887703 | orchestrator |
2025-04-10 01:00:11.887708 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] **************************************
2025-04-10 01:00:11.887713 | orchestrator | Thursday 10 April 2025 00:53:26 +0000 (0:00:02.082) 0:07:09.248 ********
2025-04-10 01:00:11.887718 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.887722 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.887727 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-04-10 01:00:11.887732 | orchestrator |
2025-04-10 01:00:11.887737 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************
2025-04-10 01:00:11.887742 | orchestrator | Thursday 10 April 2025 00:53:27 +0000 (0:00:00.665) 0:07:09.913 ********
2025-04-10 01:00:11.887747 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left).
2025-04-10 01:00:11.887751 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left).
2025-04-10 01:00:11.887756 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-04-10 01:00:11.887761 | orchestrator |
2025-04-10 01:00:11.887766 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************
2025-04-10 01:00:11.887771 | orchestrator | Thursday 10 April 2025 00:53:41 +0000 (0:00:13.893) 0:07:23.807 ********
2025-04-10 01:00:11.887776 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-04-10 01:00:11.887780 | orchestrator |
2025-04-10 01:00:11.887785 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-04-10 01:00:11.887790 | orchestrator | Thursday 10 April 2025 00:53:42 +0000 (0:00:01.679) 0:07:25.486 ********
2025-04-10 01:00:11.887795 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.887800 | orchestrator |
2025-04-10 01:00:11.887805 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] **************************
2025-04-10 01:00:11.887810 | orchestrator | Thursday 10 April 2025 00:53:43 +0000 (0:00:00.496) 0:07:25.982 ********
2025-04-10 01:00:11.887814 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.887819 | orchestrator |
2025-04-10 01:00:11.887824 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] *****************************
2025-04-10 01:00:11.887829 | orchestrator | Thursday 10 April 2025 00:53:43 +0000 (0:00:00.314) 0:07:26.296 ********
2025-04-10 01:00:11.887834 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-04-10 01:00:11.887848 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-04-10 01:00:11.887856 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-04-10 01:00:11.887861 | orchestrator |
2025-04-10 01:00:11.887866 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] **************************************
2025-04-10 01:00:11.887871 | orchestrator | Thursday 10 April 2025 00:53:50 +0000 (0:00:06.595) 0:07:32.892 ********
2025-04-10 01:00:11.887876 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-04-10 01:00:11.887880 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-04-10 01:00:11.887888 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-04-10 01:00:11.887893 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-04-10 01:00:11.887898 | orchestrator |
2025-04-10 01:00:11.887903 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-04-10 01:00:11.887919 | orchestrator | Thursday 10 April 2025 00:53:56 +0000 (0:00:05.790) 0:07:38.683 ********
2025-04-10 01:00:11.887925 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:11.887930 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:11.887935 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:11.887940 | orchestrator |
2025-04-10 01:00:11.887945 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
2025-04-10 01:00:11.887950 | orchestrator | Thursday 10 April 2025 00:53:57 +0000 (0:00:01.006) 0:07:39.689 ********
2025-04-10 01:00:11.887954 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 01:00:11.887959 | orchestrator |
2025-04-10 01:00:11.887964 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ********
2025-04-10 01:00:11.887969 | orchestrator | Thursday 10 April 2025 00:53:57 +0000 (0:00:00.685) 0:07:40.375 ********
2025-04-10 01:00:11.887974 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.887979 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.887984 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.887989 | orchestrator |
2025-04-10 01:00:11.887993 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-04-10 01:00:11.887998 | orchestrator | Thursday 10 April 2025 00:53:58 +0000 (0:00:00.379) 0:07:40.755 ********
2025-04-10 01:00:11.888003 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:11.888008 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:11.888013 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:11.888017 | orchestrator |
2025-04-10 01:00:11.888022 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ********************
2025-04-10 01:00:11.888027 | orchestrator | Thursday 10 April 2025 00:53:59 +0000 (0:00:01.261) 0:07:42.016 ********
2025-04-10 01:00:11.888032 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-10 01:00:11.888037 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-10 01:00:11.888042 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-10 01:00:11.888046 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.888051 | orchestrator |
2025-04-10 01:00:11.888056 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] *********
2025-04-10 01:00:11.888061 | orchestrator | Thursday 10 April 2025 00:54:00 +0000 (0:00:00.744) 0:07:42.761 ********
2025-04-10 01:00:11.888066 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.888070 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.888075 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.888080 | orchestrator |
2025-04-10 01:00:11.888085 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-10 01:00:11.888090 | orchestrator | Thursday 10 April 2025 00:54:00 +0000 (0:00:00.409) 0:07:43.170 ********
2025-04-10 01:00:11.888095 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:11.888102 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:11.888107 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:11.888111 | orchestrator |
2025-04-10 01:00:11.888116 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-04-10 01:00:11.888121 | orchestrator |
2025-04-10 01:00:11.888126 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-04-10 01:00:11.888131 | orchestrator | Thursday 10 April 2025 00:54:02 +0000 (0:00:02.173) 0:07:45.343 ********
2025-04-10 01:00:11.888136 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:00:11.888140 | orchestrator |
2025-04-10 01:00:11.888145 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-04-10 01:00:11.888153 | orchestrator | Thursday 10 April 2025 00:54:03 +0000 (0:00:00.814) 0:07:46.158 ********
2025-04-10 01:00:11.888158 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888162 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888167 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888172 | orchestrator |
2025-04-10 01:00:11.888177 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-04-10 01:00:11.888182 | orchestrator | Thursday 10 April 2025 00:54:03 +0000 (0:00:00.346) 0:07:46.505 ********
2025-04-10 01:00:11.888187 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.888191 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.888196 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.888201 | orchestrator |
2025-04-10 01:00:11.888206 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-04-10 01:00:11.888211 | orchestrator | Thursday 10 April 2025 00:54:04 +0000 (0:00:01.009) 0:07:47.514 ********
2025-04-10 01:00:11.888215 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.888220 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.888225 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.888230 | orchestrator |
2025-04-10 01:00:11.888235 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-04-10 01:00:11.888239 | orchestrator | Thursday 10 April 2025 00:54:05 +0000 (0:00:00.806) 0:07:48.321 ********
2025-04-10 01:00:11.888244 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.888249 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.888254 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.888259 | orchestrator |
2025-04-10 01:00:11.888263 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-04-10 01:00:11.888268 | orchestrator | Thursday 10 April 2025 00:54:06 +0000 (0:00:00.787) 0:07:49.109 ********
2025-04-10 01:00:11.888273 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888278 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888283 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888288 | orchestrator |
2025-04-10 01:00:11.888295 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-04-10 01:00:11.888300 | orchestrator | Thursday 10 April 2025 00:54:06 +0000 (0:00:00.366) 0:07:49.475 ********
2025-04-10 01:00:11.888305 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888310 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888314 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888319 | orchestrator |
2025-04-10 01:00:11.888324 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-04-10 01:00:11.888329 | orchestrator | Thursday 10 April 2025 00:54:07 +0000 (0:00:00.686) 0:07:50.161 ********
2025-04-10 01:00:11.888334 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888349 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888355 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888360 | orchestrator |
2025-04-10 01:00:11.888365 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-04-10 01:00:11.888370 | orchestrator | Thursday 10 April 2025 00:54:07 +0000 (0:00:00.372) 0:07:50.534 ********
2025-04-10 01:00:11.888374 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888379 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888384 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888389 | orchestrator |
2025-04-10 01:00:11.888394 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-04-10 01:00:11.888398 | orchestrator | Thursday 10 April 2025 00:54:08 +0000 (0:00:00.345) 0:07:50.879 ********
2025-04-10 01:00:11.888403 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888408 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888413 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888418 | orchestrator |
2025-04-10 01:00:11.888423 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-04-10 01:00:11.888427 | orchestrator | Thursday 10 April 2025 00:54:08 +0000 (0:00:00.350) 0:07:51.230 ********
2025-04-10 01:00:11.888435 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888440 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888445 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888450 | orchestrator |
2025-04-10 01:00:11.888455 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-04-10 01:00:11.888460 | orchestrator | Thursday 10 April 2025 00:54:09 +0000 (0:00:00.628) 0:07:51.859 ********
2025-04-10 01:00:11.888464 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.888469 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.888474 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.888479 | orchestrator |
2025-04-10 01:00:11.888484 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-04-10 01:00:11.888489 | orchestrator | Thursday 10 April 2025 00:54:10 +0000 (0:00:00.875) 0:07:52.736 ********
2025-04-10 01:00:11.888493 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888498 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888503 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888508 | orchestrator |
2025-04-10 01:00:11.888513 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-04-10 01:00:11.888518 | orchestrator | Thursday 10 April 2025 00:54:10 +0000 (0:00:00.373) 0:07:53.110 ********
2025-04-10 01:00:11.888522 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888527 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888532 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888537 | orchestrator |
2025-04-10 01:00:11.888542 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-04-10 01:00:11.888547 | orchestrator | Thursday 10 April 2025 00:54:10 +0000 (0:00:00.355) 0:07:53.466 ********
2025-04-10 01:00:11.888551 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.888556 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.888561 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.888566 | orchestrator |
2025-04-10 01:00:11.888570 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-04-10 01:00:11.888575 | orchestrator | Thursday 10 April 2025 00:54:11 +0000 (0:00:00.618) 0:07:54.084 ********
2025-04-10 01:00:11.888580 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.888585 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.888590 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.888595 | orchestrator |
2025-04-10 01:00:11.888599 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-04-10 01:00:11.888604 | orchestrator | Thursday 10 April 2025 00:54:11 +0000 (0:00:00.402) 0:07:54.487 ********
2025-04-10 01:00:11.888609 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.888614 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.888619 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.888624 | orchestrator |
2025-04-10 01:00:11.888628 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-04-10 01:00:11.888633 | orchestrator | Thursday 10 April 2025 00:54:12 +0000 (0:00:00.399) 0:07:54.886 ********
2025-04-10 01:00:11.888638 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888646 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888650 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888655 | orchestrator |
2025-04-10 01:00:11.888660 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-04-10 01:00:11.888665 | orchestrator | Thursday 10 April 2025 00:54:12 +0000 (0:00:00.349) 0:07:55.236 ********
2025-04-10 01:00:11.888670 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888675 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888679 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888684 | orchestrator |
2025-04-10 01:00:11.888689 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-04-10 01:00:11.888694 | orchestrator | Thursday 10 April 2025 00:54:13 +0000 (0:00:00.613) 0:07:55.850 ********
2025-04-10 01:00:11.888699 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888704 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888711 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888716 | orchestrator |
2025-04-10 01:00:11.888721 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-04-10 01:00:11.888726 | orchestrator | Thursday 10 April 2025 00:54:13 +0000 (0:00:00.337) 0:07:56.187 ********
2025-04-10 01:00:11.888731 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.888736 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.888741 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.888745 | orchestrator |
2025-04-10 01:00:11.888750 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-04-10 01:00:11.888755 | orchestrator | Thursday 10 April 2025 00:54:13 +0000 (0:00:00.369) 0:07:56.557 ********
2025-04-10 01:00:11.888760 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888765 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888770 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888774 | orchestrator |
2025-04-10 01:00:11.888781 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-04-10 01:00:11.888786 | orchestrator | Thursday 10 April 2025 00:54:14 +0000 (0:00:00.375) 0:07:56.933 ********
2025-04-10 01:00:11.888791 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888796 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888813 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888818 | orchestrator |
2025-04-10 01:00:11.888823 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-04-10 01:00:11.888828 | orchestrator | Thursday 10 April 2025 00:54:14 +0000 (0:00:00.668) 0:07:57.601 ********
2025-04-10 01:00:11.888833 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888838 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888851 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888856 | orchestrator |
2025-04-10 01:00:11.888861 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-04-10 01:00:11.888866 | orchestrator | Thursday 10 April 2025 00:54:15 +0000 (0:00:00.357) 0:07:57.959 ********
2025-04-10 01:00:11.888870 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888875 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888880 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888885 | orchestrator |
2025-04-10 01:00:11.888890 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-04-10 01:00:11.888895 | orchestrator | Thursday 10 April 2025 00:54:15 +0000 (0:00:00.366) 0:07:58.326 ********
2025-04-10 01:00:11.888899 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888904 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888909 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888914 | orchestrator |
2025-04-10 01:00:11.888919 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-04-10 01:00:11.888924 | orchestrator | Thursday 10 April 2025 00:54:16 +0000 (0:00:00.365) 0:07:58.691 ********
2025-04-10 01:00:11.888928 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888933 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888938 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888943 | orchestrator |
2025-04-10 01:00:11.888947 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-04-10 01:00:11.888952 | orchestrator | Thursday 10 April 2025 00:54:16 +0000 (0:00:00.614) 0:07:59.306 ********
2025-04-10 01:00:11.888957 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888962 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888967 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.888972 | orchestrator |
2025-04-10 01:00:11.888976 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-04-10 01:00:11.888981 | orchestrator | Thursday 10 April 2025 00:54:17 +0000 (0:00:00.389) 0:07:59.696 ********
2025-04-10 01:00:11.888986 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.888991 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.888999 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889004 | orchestrator |
2025-04-10 01:00:11.889009 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-04-10 01:00:11.889014 | orchestrator | Thursday 10 April 2025 00:54:17 +0000 (0:00:00.329) 0:08:00.025 ********
2025-04-10 01:00:11.889018 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889023 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889028 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889033 | orchestrator |
2025-04-10 01:00:11.889038 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-04-10 01:00:11.889042 | orchestrator | Thursday 10 April 2025 00:54:17 +0000 (0:00:00.348) 0:08:00.373 ********
2025-04-10 01:00:11.889047 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889052 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889057 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889062 | orchestrator |
2025-04-10 01:00:11.889067 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-04-10 01:00:11.889071 | orchestrator | Thursday 10 April 2025 00:54:18 +0000 (0:00:00.659) 0:08:01.033 ********
2025-04-10 01:00:11.889076 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889081 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889086 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889091 | orchestrator |
2025-04-10 01:00:11.889095 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-04-10 01:00:11.889100 | orchestrator | Thursday 10 April 2025 00:54:18 +0000 (0:00:00.398) 0:08:01.432 ********
2025-04-10 01:00:11.889105 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889110 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889115 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889119 | orchestrator |
2025-04-10 01:00:11.889124 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-04-10 01:00:11.889129 | orchestrator | Thursday 10 April 2025 00:54:19 +0000 (0:00:00.430) 0:08:01.863 ********
2025-04-10 01:00:11.889134 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-10 01:00:11.889139 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-10 01:00:11.889144 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889149 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-10 01:00:11.889153 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-10 01:00:11.889158 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889163 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-10 01:00:11.889168 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-10 01:00:11.889173 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889178 | orchestrator |
2025-04-10 01:00:11.889182 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-04-10 01:00:11.889187 | orchestrator | Thursday 10 April 2025 00:54:19 +0000 (0:00:00.424) 0:08:02.287 ********
2025-04-10 01:00:11.889192 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-04-10 01:00:11.889200 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-04-10 01:00:11.889205 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889213 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-04-10 01:00:11.889218 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-04-10 01:00:11.889222 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889227 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-04-10 01:00:11.889244 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-04-10 01:00:11.889249 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889254 | orchestrator |
2025-04-10 01:00:11.889259 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-04-10 01:00:11.889264 | orchestrator | Thursday 10 April 2025 00:54:20 +0000 (0:00:00.701) 0:08:02.988 ********
2025-04-10 01:00:11.889272 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889277 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889282 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889287 | orchestrator |
2025-04-10 01:00:11.889291 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-04-10 01:00:11.889296 | orchestrator | Thursday 10 April 2025 00:54:20 +0000 (0:00:00.402) 0:08:03.391 ********
2025-04-10 01:00:11.889301 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889306 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889311 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889316 | orchestrator |
2025-04-10 01:00:11.889321 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-04-10 01:00:11.889326 | orchestrator | Thursday 10 April 2025 00:54:21 +0000 (0:00:00.426) 0:08:03.817 ********
2025-04-10 01:00:11.889330 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889335 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889340 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889345 | orchestrator |
2025-04-10 01:00:11.889350 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-10 01:00:11.889355 | orchestrator | Thursday 10 April 2025 00:54:21 +0000 (0:00:00.351) 0:08:04.168 ********
2025-04-10 01:00:11.889359 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889364 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889369 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889374 | orchestrator |
2025-04-10 01:00:11.889379 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-10 01:00:11.889383 | orchestrator | Thursday 10 April 2025 00:54:22 +0000 (0:00:00.641) 0:08:04.810 ********
2025-04-10 01:00:11.889388 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889393 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889398 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889403 | orchestrator |
2025-04-10 01:00:11.889408 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-10 01:00:11.889415 | orchestrator | Thursday 10 April 2025 00:54:22 +0000 (0:00:00.388) 0:08:05.198 ********
2025-04-10 01:00:11.889420 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889424 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889429 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889434 | orchestrator |
2025-04-10 01:00:11.889439 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-10 01:00:11.889444 | orchestrator | Thursday 10 April 2025 00:54:22 +0000 (0:00:00.396) 0:08:05.595 ********
2025-04-10 01:00:11.889449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.889453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.889458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.889463 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889468 | orchestrator |
2025-04-10 01:00:11.889473 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-10 01:00:11.889477 | orchestrator | Thursday 10 April 2025 00:54:23 +0000 (0:00:00.486) 0:08:06.081 ********
2025-04-10 01:00:11.889482 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.889487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.889492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.889497 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889501 | orchestrator |
2025-04-10 01:00:11.889506 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-10 01:00:11.889511 | orchestrator | Thursday 10 April 2025 00:54:23 +0000 (0:00:00.460) 0:08:06.542 ********
2025-04-10 01:00:11.889516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.889525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.889530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.889535 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889540 | orchestrator |
2025-04-10 01:00:11.889544 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-10 01:00:11.889549 | orchestrator | Thursday 10 April 2025 00:54:24 +0000 (0:00:00.822) 0:08:07.364 ********
2025-04-10 01:00:11.889554 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889559 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889564 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889569 | orchestrator |
2025-04-10 01:00:11.889573 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-04-10 01:00:11.889578 | orchestrator | Thursday 10 April 2025 00:54:25 +0000 (0:00:00.686) 0:08:08.050 ********
2025-04-10 01:00:11.889583 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-10 01:00:11.889588 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889593 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-10 01:00:11.889598 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889603 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-10 01:00:11.889607 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889612 | orchestrator |
2025-04-10 01:00:11.889617 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-04-10 01:00:11.889622 | orchestrator | Thursday 10 April 2025 00:54:25 +0000 (0:00:00.591) 0:08:08.642 ********
2025-04-10 01:00:11.889627 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889632 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889636 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889641 | orchestrator |
2025-04-10 01:00:11.889646 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-10 01:00:11.889662 | orchestrator | Thursday 10 April 2025 00:54:26 +0000 (0:00:00.353) 0:08:08.996 ********
2025-04-10 01:00:11.889667 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889672 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889677 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889682 | orchestrator |
2025-04-10 01:00:11.889687 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-04-10 01:00:11.889692 | orchestrator | Thursday 10 April 2025 00:54:26 +0000 (0:00:00.358) 0:08:09.354 ********
2025-04-10 01:00:11.889696 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-10 01:00:11.889701 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889706 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-10 01:00:11.889711 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889716 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-10 01:00:11.889721 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889725 | orchestrator |
2025-04-10 01:00:11.889730 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-04-10 01:00:11.889735 | orchestrator | Thursday 10 April 2025 00:54:27 +0000 (0:00:00.835) 0:08:10.190 ********
2025-04-10 01:00:11.889740 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-04-10 01:00:11.889745 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889750 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-04-10 01:00:11.889755 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889760 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-04-10 01:00:11.889765 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889769 | orchestrator |
2025-04-10 01:00:11.889774 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-04-10 01:00:11.889783 | orchestrator | Thursday 10 April 2025 00:54:27 +0000 (0:00:00.406) 0:08:10.596 ********
2025-04-10 01:00:11.889788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.889793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.889798 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.889802 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-04-10 01:00:11.889807 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889812 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-04-10 01:00:11.889817 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-04-10 01:00:11.889822 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889827 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-04-10 01:00:11.889831 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-04-10 01:00:11.889836 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-04-10 01:00:11.889863 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889869 | orchestrator |
2025-04-10 01:00:11.889874 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-04-10 01:00:11.889878 | orchestrator | Thursday 10 April 2025 00:54:28 +0000 (0:00:00.658) 0:08:11.255 ********
2025-04-10 01:00:11.889883 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889888 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889893 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889898 | orchestrator |
2025-04-10 01:00:11.889903 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-04-10 01:00:11.889908 | orchestrator | Thursday 10 April 2025 00:54:29 +0000 (0:00:00.987) 0:08:12.243 ********
2025-04-10 01:00:11.889912 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-10 01:00:11.889917 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889922 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-04-10 01:00:11.889927 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889932 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-04-10 01:00:11.889936 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.889941 | orchestrator |
2025-04-10 01:00:11.889946 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-04-10 01:00:11.889951 | orchestrator | Thursday 10 April 2025 00:54:30 +0000 (0:00:00.579) 0:08:12.822 ********
2025-04-10 01:00:11.889956 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.889961 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.889969 | orchestrator |
skipping: [testbed-node-5] 2025-04-10 01:00:11.889977 | orchestrator | 2025-04-10 01:00:11.889982 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-10 01:00:11.889987 | orchestrator | Thursday 10 April 2025 00:54:31 +0000 (0:00:00.937) 0:08:13.760 ******** 2025-04-10 01:00:11.889992 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.889997 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.890001 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.890006 | orchestrator | 2025-04-10 01:00:11.890011 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-04-10 01:00:11.890031 | orchestrator | Thursday 10 April 2025 00:54:31 +0000 (0:00:00.575) 0:08:14.335 ******** 2025-04-10 01:00:11.890036 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.890041 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.890046 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.890051 | orchestrator | 2025-04-10 01:00:11.890058 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-04-10 01:00:11.890063 | orchestrator | Thursday 10 April 2025 00:54:32 +0000 (0:00:00.624) 0:08:14.959 ******** 2025-04-10 01:00:11.890068 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-10 01:00:11.890073 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-10 01:00:11.890081 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-10 01:00:11.890087 | orchestrator | 2025-04-10 01:00:11.890104 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-04-10 01:00:11.890111 | orchestrator | Thursday 10 April 2025 00:54:32 +0000 (0:00:00.706) 0:08:15.666 ******** 2025-04-10 01:00:11.890116 | orchestrator | 
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.890121 | orchestrator | 2025-04-10 01:00:11.890125 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-04-10 01:00:11.890130 | orchestrator | Thursday 10 April 2025 00:54:33 +0000 (0:00:00.578) 0:08:16.245 ******** 2025-04-10 01:00:11.890135 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.890140 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.890145 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.890150 | orchestrator | 2025-04-10 01:00:11.890154 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-04-10 01:00:11.890159 | orchestrator | Thursday 10 April 2025 00:54:34 +0000 (0:00:00.599) 0:08:16.844 ******** 2025-04-10 01:00:11.890164 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.890169 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.890174 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.890179 | orchestrator | 2025-04-10 01:00:11.890183 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-04-10 01:00:11.890188 | orchestrator | Thursday 10 April 2025 00:54:34 +0000 (0:00:00.366) 0:08:17.211 ******** 2025-04-10 01:00:11.890193 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.890198 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.890203 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.890207 | orchestrator | 2025-04-10 01:00:11.890212 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-04-10 01:00:11.890217 | orchestrator | Thursday 10 April 2025 00:54:34 +0000 (0:00:00.333) 0:08:17.544 ******** 2025-04-10 01:00:11.890222 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.890227 | 
orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.890231 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.890236 | orchestrator | 2025-04-10 01:00:11.890241 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-04-10 01:00:11.890246 | orchestrator | Thursday 10 April 2025 00:54:35 +0000 (0:00:00.406) 0:08:17.951 ******** 2025-04-10 01:00:11.890251 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.890256 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.890261 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.890265 | orchestrator | 2025-04-10 01:00:11.890270 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-04-10 01:00:11.890275 | orchestrator | Thursday 10 April 2025 00:54:36 +0000 (0:00:01.155) 0:08:19.106 ******** 2025-04-10 01:00:11.890280 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.890285 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.890290 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.890294 | orchestrator | 2025-04-10 01:00:11.890299 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-04-10 01:00:11.890304 | orchestrator | Thursday 10 April 2025 00:54:36 +0000 (0:00:00.530) 0:08:19.636 ******** 2025-04-10 01:00:11.890309 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-04-10 01:00:11.890316 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-04-10 01:00:11.890321 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-04-10 01:00:11.890326 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-10 01:00:11.890331 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.file-max', 'value': 26234859}) 2025-04-10 01:00:11.890339 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-10 01:00:11.890344 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-10 01:00:11.890349 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-10 01:00:11.890354 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-10 01:00:11.890359 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-10 01:00:11.890363 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-10 01:00:11.890368 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-10 01:00:11.890373 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-10 01:00:11.890378 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-10 01:00:11.890383 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-10 01:00:11.890387 | orchestrator | 2025-04-10 01:00:11.890392 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-04-10 01:00:11.890397 | orchestrator | Thursday 10 April 2025 00:54:39 +0000 (0:00:02.409) 0:08:22.046 ******** 2025-04-10 01:00:11.890402 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.890407 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.890412 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.890417 | orchestrator | 2025-04-10 01:00:11.890424 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-04-10 01:00:11.890429 | orchestrator | Thursday 
10 April 2025 00:54:39 +0000 (0:00:00.487) 0:08:22.534 ******** 2025-04-10 01:00:11.890434 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.890439 | orchestrator | 2025-04-10 01:00:11.890443 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-04-10 01:00:11.890459 | orchestrator | Thursday 10 April 2025 00:54:40 +0000 (0:00:00.815) 0:08:23.349 ******** 2025-04-10 01:00:11.890465 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-10 01:00:11.890470 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-10 01:00:11.890475 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-10 01:00:11.890480 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-04-10 01:00:11.890485 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-04-10 01:00:11.890490 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-04-10 01:00:11.890495 | orchestrator | 2025-04-10 01:00:11.890499 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-04-10 01:00:11.890504 | orchestrator | Thursday 10 April 2025 00:54:41 +0000 (0:00:01.148) 0:08:24.497 ******** 2025-04-10 01:00:11.890509 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:00:11.890514 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-10 01:00:11.890519 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-10 01:00:11.890524 | orchestrator | 2025-04-10 01:00:11.890528 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-04-10 01:00:11.890533 | orchestrator | Thursday 10 April 2025 00:54:43 +0000 (0:00:01.898) 0:08:26.395 ******** 2025-04-10 01:00:11.890538 | 
orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-10 01:00:11.890543 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-10 01:00:11.890548 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.890555 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-10 01:00:11.890560 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-10 01:00:11.890568 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.890573 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-10 01:00:11.890578 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-10 01:00:11.890582 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.890587 | orchestrator | 2025-04-10 01:00:11.890592 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-04-10 01:00:11.890597 | orchestrator | Thursday 10 April 2025 00:54:45 +0000 (0:00:01.744) 0:08:28.140 ******** 2025-04-10 01:00:11.890602 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-10 01:00:11.890607 | orchestrator | 2025-04-10 01:00:11.890611 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-04-10 01:00:11.890616 | orchestrator | Thursday 10 April 2025 00:54:47 +0000 (0:00:02.475) 0:08:30.615 ******** 2025-04-10 01:00:11.890621 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.890626 | orchestrator | 2025-04-10 01:00:11.890631 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-04-10 01:00:11.890636 | orchestrator | Thursday 10 April 2025 00:54:48 +0000 (0:00:00.851) 0:08:31.466 ******** 2025-04-10 01:00:11.890640 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.890645 | orchestrator | skipping: [testbed-node-4] 2025-04-10 
01:00:11.890650 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.890655 | orchestrator | 2025-04-10 01:00:11.890660 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-04-10 01:00:11.890665 | orchestrator | Thursday 10 April 2025 00:54:49 +0000 (0:00:00.461) 0:08:31.928 ******** 2025-04-10 01:00:11.890670 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.890674 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.890679 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.890684 | orchestrator | 2025-04-10 01:00:11.890689 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-04-10 01:00:11.890694 | orchestrator | Thursday 10 April 2025 00:54:49 +0000 (0:00:00.352) 0:08:32.281 ******** 2025-04-10 01:00:11.890699 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.890704 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.890709 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.890713 | orchestrator | 2025-04-10 01:00:11.890718 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-04-10 01:00:11.890723 | orchestrator | Thursday 10 April 2025 00:54:49 +0000 (0:00:00.325) 0:08:32.607 ******** 2025-04-10 01:00:11.890728 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.890733 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.890738 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.890743 | orchestrator | 2025-04-10 01:00:11.890748 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-04-10 01:00:11.890752 | orchestrator | Thursday 10 April 2025 00:54:50 +0000 (0:00:00.640) 0:08:33.247 ******** 2025-04-10 01:00:11.890757 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.890762 | orchestrator | 2025-04-10 01:00:11.890767 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-04-10 01:00:11.890772 | orchestrator | Thursday 10 April 2025 00:54:51 +0000 (0:00:00.696) 0:08:33.944 ******** 2025-04-10 01:00:11.890777 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7af0ad6a-7281-507c-97d1-7760f3587d37', 'data_vg': 'ceph-7af0ad6a-7281-507c-97d1-7760f3587d37'}) 2025-04-10 01:00:11.890782 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-47ce51ce-522f-5092-939d-97f529b04c78', 'data_vg': 'ceph-47ce51ce-522f-5092-939d-97f529b04c78'}) 2025-04-10 01:00:11.890787 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e6570ad4-669c-53e9-93b8-24292f6b58fb', 'data_vg': 'ceph-e6570ad4-669c-53e9-93b8-24292f6b58fb'}) 2025-04-10 01:00:11.890809 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-52286b97-e205-54c6-a29d-cc3afdc4b583', 'data_vg': 'ceph-52286b97-e205-54c6-a29d-cc3afdc4b583'}) 2025-04-10 01:00:11.890815 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-543b72d2-41b4-5023-b438-6662cb79109c', 'data_vg': 'ceph-543b72d2-41b4-5023-b438-6662cb79109c'}) 2025-04-10 01:00:11.890820 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1024c186-728b-5ddc-b380-e3967fe3a792', 'data_vg': 'ceph-1024c186-728b-5ddc-b380-e3967fe3a792'}) 2025-04-10 01:00:11.890825 | orchestrator | 2025-04-10 01:00:11.890830 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-04-10 01:00:11.890835 | orchestrator | Thursday 10 April 2025 00:55:29 +0000 (0:00:38.639) 0:09:12.584 ******** 2025-04-10 01:00:11.890851 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.890856 | orchestrator | skipping: [testbed-node-4] 2025-04-10 
01:00:11.890861 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.890866 | orchestrator | 2025-04-10 01:00:11.890871 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] ********************************* 2025-04-10 01:00:11.890875 | orchestrator | Thursday 10 April 2025 00:55:30 +0000 (0:00:00.502) 0:09:13.086 ******** 2025-04-10 01:00:11.890880 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.890885 | orchestrator | 2025-04-10 01:00:11.890890 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-04-10 01:00:11.890895 | orchestrator | Thursday 10 April 2025 00:55:30 +0000 (0:00:00.568) 0:09:13.655 ******** 2025-04-10 01:00:11.890900 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.890905 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.890910 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.890914 | orchestrator | 2025-04-10 01:00:11.890919 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-04-10 01:00:11.890924 | orchestrator | Thursday 10 April 2025 00:55:31 +0000 (0:00:00.690) 0:09:14.346 ******** 2025-04-10 01:00:11.890929 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.890937 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.890942 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.890947 | orchestrator | 2025-04-10 01:00:11.890952 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-04-10 01:00:11.890957 | orchestrator | Thursday 10 April 2025 00:55:33 +0000 (0:00:02.052) 0:09:16.399 ******** 2025-04-10 01:00:11.890961 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.890966 | orchestrator | 2025-04-10 01:00:11.890971 | orchestrator | TASK 
[ceph-osd : generate systemd unit file] *********************************** 2025-04-10 01:00:11.890978 | orchestrator | Thursday 10 April 2025 00:55:34 +0000 (0:00:00.562) 0:09:16.962 ******** 2025-04-10 01:00:11.890983 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.890988 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.890993 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.890998 | orchestrator | 2025-04-10 01:00:11.891003 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-04-10 01:00:11.891007 | orchestrator | Thursday 10 April 2025 00:55:35 +0000 (0:00:01.457) 0:09:18.419 ******** 2025-04-10 01:00:11.891012 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.891017 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.891022 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.891027 | orchestrator | 2025-04-10 01:00:11.891032 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-04-10 01:00:11.891036 | orchestrator | Thursday 10 April 2025 00:55:36 +0000 (0:00:01.166) 0:09:19.586 ******** 2025-04-10 01:00:11.891041 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.891046 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.891051 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.891059 | orchestrator | 2025-04-10 01:00:11.891064 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-04-10 01:00:11.891069 | orchestrator | Thursday 10 April 2025 00:55:38 +0000 (0:00:01.689) 0:09:21.275 ******** 2025-04-10 01:00:11.891073 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.891078 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.891083 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.891088 | orchestrator | 2025-04-10 01:00:11.891093 | orchestrator | TASK [ceph-osd : 
add ceph-osd systemd service overrides] *********************** 2025-04-10 01:00:11.891097 | orchestrator | Thursday 10 April 2025 00:55:39 +0000 (0:00:00.408) 0:09:21.683 ******** 2025-04-10 01:00:11.891102 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.891107 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.891112 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.891117 | orchestrator | 2025-04-10 01:00:11.891122 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-04-10 01:00:11.891127 | orchestrator | Thursday 10 April 2025 00:55:39 +0000 (0:00:00.686) 0:09:22.370 ******** 2025-04-10 01:00:11.891132 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-10 01:00:11.891137 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-04-10 01:00:11.891142 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-04-10 01:00:11.891147 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-04-10 01:00:11.891152 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-04-10 01:00:11.891156 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-04-10 01:00:11.891161 | orchestrator | 2025-04-10 01:00:11.891166 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-04-10 01:00:11.891171 | orchestrator | Thursday 10 April 2025 00:55:40 +0000 (0:00:01.060) 0:09:23.431 ******** 2025-04-10 01:00:11.891176 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-04-10 01:00:11.891181 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-04-10 01:00:11.891186 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-04-10 01:00:11.891190 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-04-10 01:00:11.891195 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-04-10 01:00:11.891212 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-04-10 01:00:11.891218 | orchestrator | 2025-04-10 01:00:11.891223 | 
orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-04-10 01:00:11.891228 | orchestrator | Thursday 10 April 2025 00:55:44 +0000 (0:00:03.406) 0:09:26.837 ******** 2025-04-10 01:00:11.891232 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.891237 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.891242 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-10 01:00:11.891247 | orchestrator | 2025-04-10 01:00:11.891252 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-04-10 01:00:11.891257 | orchestrator | Thursday 10 April 2025 00:55:47 +0000 (0:00:02.921) 0:09:29.759 ******** 2025-04-10 01:00:11.891261 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.891266 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.891271 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 
2025-04-10 01:00:11.891276 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-10 01:00:11.891281 | orchestrator | 2025-04-10 01:00:11.891286 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-04-10 01:00:11.891291 | orchestrator | Thursday 10 April 2025 00:55:59 +0000 (0:00:12.635) 0:09:42.395 ******** 2025-04-10 01:00:11.891295 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.891300 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.891305 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.891310 | orchestrator | 2025-04-10 01:00:11.891315 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-04-10 01:00:11.891320 | orchestrator | Thursday 10 April 2025 00:56:00 +0000 (0:00:00.571) 0:09:42.966 ******** 2025-04-10 01:00:11.891328 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.891333 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.891338 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.891343 | orchestrator | 2025-04-10 01:00:11.891348 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-10 01:00:11.891352 | orchestrator | Thursday 10 April 2025 00:56:01 +0000 (0:00:01.285) 0:09:44.251 ******** 2025-04-10 01:00:11.891357 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.891362 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.891367 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.891372 | orchestrator | 2025-04-10 01:00:11.891377 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-04-10 01:00:11.891382 | orchestrator | Thursday 10 April 2025 00:56:02 +0000 (0:00:00.791) 0:09:45.043 ******** 2025-04-10 01:00:11.891387 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for 
testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:00:11.891391 | orchestrator |
2025-04-10 01:00:11.891396 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] **********************
2025-04-10 01:00:11.891401 | orchestrator | Thursday 10 April 2025 00:56:03 +0000 (0:00:00.895) 0:09:45.939 ********
2025-04-10 01:00:11.891406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.891411 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.891416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.891421 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891425 | orchestrator |
2025-04-10 01:00:11.891430 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ********
2025-04-10 01:00:11.891435 | orchestrator | Thursday 10 April 2025 00:56:03 +0000 (0:00:00.410) 0:09:46.350 ********
2025-04-10 01:00:11.891440 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891445 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.891450 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.891455 | orchestrator |
2025-04-10 01:00:11.891459 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] *******************************
2025-04-10 01:00:11.891464 | orchestrator | Thursday 10 April 2025 00:56:04 +0000 (0:00:00.352) 0:09:46.702 ********
2025-04-10 01:00:11.891469 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891474 | orchestrator |
2025-04-10 01:00:11.891479 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] ***********************
2025-04-10 01:00:11.891486 | orchestrator | Thursday 10 April 2025 00:56:04 +0000 (0:00:00.242) 0:09:46.945 ********
2025-04-10 01:00:11.891491 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891496 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.891501 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.891506 | orchestrator |
2025-04-10 01:00:11.891511 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] *********************************
2025-04-10 01:00:11.891515 | orchestrator | Thursday 10 April 2025 00:56:04 +0000 (0:00:00.642) 0:09:47.587 ********
2025-04-10 01:00:11.891520 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891525 | orchestrator |
2025-04-10 01:00:11.891530 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ********************
2025-04-10 01:00:11.891535 | orchestrator | Thursday 10 April 2025 00:56:05 +0000 (0:00:00.282) 0:09:47.870 ********
2025-04-10 01:00:11.891540 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891545 | orchestrator |
2025-04-10 01:00:11.891550 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
2025-04-10 01:00:11.891555 | orchestrator | Thursday 10 April 2025 00:56:05 +0000 (0:00:00.261) 0:09:48.132 ********
2025-04-10 01:00:11.891562 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891567 | orchestrator |
2025-04-10 01:00:11.891572 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ******************************
2025-04-10 01:00:11.891577 | orchestrator | Thursday 10 April 2025 00:56:05 +0000 (0:00:00.177) 0:09:48.309 ********
2025-04-10 01:00:11.891585 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891590 | orchestrator |
2025-04-10 01:00:11.891595 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] *****************
2025-04-10 01:00:11.891599 | orchestrator | Thursday 10 April 2025 00:56:05 +0000 (0:00:00.307) 0:09:48.616 ********
2025-04-10 01:00:11.891604 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891609 | orchestrator |
2025-04-10 01:00:11.891614 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] *******************
2025-04-10 01:00:11.891629 | orchestrator | Thursday 10 April 2025 00:56:06 +0000 (0:00:00.253) 0:09:48.869 ********
2025-04-10 01:00:11.891635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.891640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.891645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.891650 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891655 | orchestrator |
2025-04-10 01:00:11.891660 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] *********
2025-04-10 01:00:11.891665 | orchestrator | Thursday 10 April 2025 00:56:06 +0000 (0:00:00.436) 0:09:49.306 ********
2025-04-10 01:00:11.891670 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891677 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.891682 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.891687 | orchestrator |
2025-04-10 01:00:11.891692 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] ***************
2025-04-10 01:00:11.891697 | orchestrator | Thursday 10 April 2025 00:56:06 +0000 (0:00:00.375) 0:09:49.682 ********
2025-04-10 01:00:11.891701 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891706 | orchestrator |
2025-04-10 01:00:11.891711 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] ****************************
2025-04-10 01:00:11.891716 | orchestrator | Thursday 10 April 2025 00:56:07 +0000 (0:00:00.754) 0:09:50.437 ********
2025-04-10 01:00:11.891721 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891726 | orchestrator |
2025-04-10 01:00:11.891731 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-10 01:00:11.891735 | orchestrator | Thursday 10 April 2025 00:56:07 +0000 (0:00:00.230) 0:09:50.667 ********
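The handler chain logged above (disable balancer, disable pg autoscale, restart OSDs, then re-enable both) is ceph-ansible's rolling-restart safeguard; every step is skipped in this run because no OSD configuration change triggered a restart. A minimal sketch of that ordering follows; the task names mirror the log, but the task bodies and the `pool_list` variable and script path are assumptions, not the actual ceph-ansible implementation:

```yaml
# Sketch only: ordering mirrors the logged handler chain; bodies are assumed.
- name: disable balancer
  command: ceph balancer off
  changed_when: false

- name: disable pg autoscale on pools
  command: "ceph osd pool set {{ item }} pg_autoscale_mode off"
  loop: "{{ pool_list | default([]) }}"   # hypothetical variable name

- name: restart ceph osds daemon(s)
  command: bash /tmp/restart_osd_daemon.sh   # hypothetical path for the copied script

- name: re-enable pg autoscale on pools
  command: "ceph osd pool set {{ item }} pg_autoscale_mode on"
  loop: "{{ pool_list | default([]) }}"

- name: re-enable balancer
  command: ceph balancer on
  changed_when: false
```

Pausing the balancer and autoscaler first keeps PG movement from racing the daemon restarts; re-enabling them only after all OSDs are back avoids rebalancing against a partially restarted cluster.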
2025-04-10 01:00:11.891740 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.891745 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.891750 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.891755 | orchestrator |
2025-04-10 01:00:11.891760 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-04-10 01:00:11.891765 | orchestrator |
2025-04-10 01:00:11.891770 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-04-10 01:00:11.891774 | orchestrator | Thursday 10 April 2025 00:56:11 +0000 (0:00:03.156) 0:09:53.823 ********
2025-04-10 01:00:11.891779 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:00:11.891785 | orchestrator |
2025-04-10 01:00:11.891789 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-04-10 01:00:11.891794 | orchestrator | Thursday 10 April 2025 00:56:12 +0000 (0:00:01.354) 0:09:55.178 ********
2025-04-10 01:00:11.891799 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.891804 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.891809 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.891814 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.891819 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.891824 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.891828 | orchestrator |
2025-04-10 01:00:11.891833 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-04-10 01:00:11.891860 | orchestrator | Thursday 10 April 2025 00:56:13 +0000 (0:00:00.721) 0:09:55.899 ********
2025-04-10 01:00:11.891866 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.891871 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.891879 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.891884 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.891889 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.891894 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.891898 | orchestrator |
2025-04-10 01:00:11.891903 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-04-10 01:00:11.891908 | orchestrator | Thursday 10 April 2025 00:56:14 +0000 (0:00:01.328) 0:09:57.228 ********
2025-04-10 01:00:11.891913 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.891918 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.891922 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.891927 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.891932 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.891937 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.891942 | orchestrator |
2025-04-10 01:00:11.891946 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-04-10 01:00:11.891954 | orchestrator | Thursday 10 April 2025 00:56:15 +0000 (0:00:01.312) 0:09:58.540 ********
2025-04-10 01:00:11.891958 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.891963 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.891968 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.891973 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.891978 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.891982 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.891987 | orchestrator |
2025-04-10 01:00:11.891992 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-04-10 01:00:11.891997 | orchestrator | Thursday 10 April 2025 00:56:16 +0000 (0:00:01.017) 0:09:59.557 ********
2025-04-10 01:00:11.892002 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892007 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.892012 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892017 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892022 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.892026 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.892031 | orchestrator |
2025-04-10 01:00:11.892036 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-04-10 01:00:11.892041 | orchestrator | Thursday 10 April 2025 00:56:17 +0000 (0:00:01.017) 0:10:00.575 ********
2025-04-10 01:00:11.892046 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892051 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892055 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892060 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892065 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892070 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892075 | orchestrator |
2025-04-10 01:00:11.892080 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-04-10 01:00:11.892097 | orchestrator | Thursday 10 April 2025 00:56:18 +0000 (0:00:00.674) 0:10:01.249 ********
2025-04-10 01:00:11.892103 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892108 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892113 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892118 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892123 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892128 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892133 | orchestrator |
2025-04-10 01:00:11.892138 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-04-10 01:00:11.892143 | orchestrator | Thursday 10 April 2025 00:56:19 +0000 (0:00:00.976) 0:10:02.225 ********
2025-04-10 01:00:11.892148 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892153 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892160 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892165 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892170 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892175 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892183 | orchestrator |
2025-04-10 01:00:11.892188 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-04-10 01:00:11.892193 | orchestrator | Thursday 10 April 2025 00:56:20 +0000 (0:00:00.722) 0:10:02.948 ********
2025-04-10 01:00:11.892197 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892203 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892207 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892212 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892217 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892222 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892227 | orchestrator |
2025-04-10 01:00:11.892232 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-04-10 01:00:11.892237 | orchestrator | Thursday 10 April 2025 00:56:21 +0000 (0:00:00.921) 0:10:03.870 ********
2025-04-10 01:00:11.892242 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892247 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892251 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892256 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892261 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892266 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892271 | orchestrator |
2025-04-10 01:00:11.892276 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-04-10 01:00:11.892281 | orchestrator | Thursday 10 April 2025 00:56:21 +0000 (0:00:00.717) 0:10:04.587 ********
2025-04-10 01:00:11.892286 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.892290 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.892295 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.892300 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.892305 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.892309 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.892314 | orchestrator |
2025-04-10 01:00:11.892319 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-04-10 01:00:11.892324 | orchestrator | Thursday 10 April 2025 00:56:23 +0000 (0:00:01.330) 0:10:05.918 ********
2025-04-10 01:00:11.892329 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892334 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892339 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892344 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892349 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892353 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892358 | orchestrator |
2025-04-10 01:00:11.892363 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-04-10 01:00:11.892368 | orchestrator | Thursday 10 April 2025 00:56:23 +0000 (0:00:00.750) 0:10:06.668 ********
2025-04-10 01:00:11.892373 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.892378 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.892383 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.892387 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892392 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892397 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892402 | orchestrator |
2025-04-10 01:00:11.892407 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-04-10 01:00:11.892412 | orchestrator | Thursday 10 April 2025 00:56:25 +0000 (0:00:01.109) 0:10:07.777 ********
2025-04-10 01:00:11.892417 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892421 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892426 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892431 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.892436 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.892441 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.892446 | orchestrator |
2025-04-10 01:00:11.892451 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-04-10 01:00:11.892456 | orchestrator | Thursday 10 April 2025 00:56:25 +0000 (0:00:00.809) 0:10:08.586 ********
2025-04-10 01:00:11.892463 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892468 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892473 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892478 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.892483 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.892488 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.892492 | orchestrator |
2025-04-10 01:00:11.892497 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-04-10 01:00:11.892502 | orchestrator | Thursday 10 April 2025 00:56:26 +0000 (0:00:00.987) 0:10:09.574 ********
2025-04-10 01:00:11.892507 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892512 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892517 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892522 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.892526 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.892534 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.892561 | orchestrator |
2025-04-10 01:00:11.892566 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-04-10 01:00:11.892571 | orchestrator | Thursday 10 April 2025 00:56:27 +0000 (0:00:00.715) 0:10:10.289 ********
2025-04-10 01:00:11.892576 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892582 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892587 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892591 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892596 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892601 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892606 | orchestrator |
2025-04-10 01:00:11.892623 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-04-10 01:00:11.892629 | orchestrator | Thursday 10 April 2025 00:56:28 +0000 (0:00:00.896) 0:10:11.186 ********
2025-04-10 01:00:11.892634 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892639 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892644 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892649 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892654 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892659 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892663 | orchestrator |
2025-04-10 01:00:11.892668 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-04-10 01:00:11.892673 | orchestrator | Thursday 10 April 2025 00:56:29 +0000 (0:00:00.679) 0:10:11.865 ********
2025-04-10 01:00:11.892678 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.892683 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.892687 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.892692 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892697 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892702 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892707 | orchestrator |
2025-04-10 01:00:11.892712 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-04-10 01:00:11.892721 | orchestrator | Thursday 10 April 2025 00:56:30 +0000 (0:00:01.224) 0:10:13.090 ********
2025-04-10 01:00:11.892726 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:11.892731 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:11.892736 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:11.892741 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.892745 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.892750 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.892755 | orchestrator |
2025-04-10 01:00:11.892760 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-04-10 01:00:11.892765 | orchestrator | Thursday 10 April 2025 00:56:31 +0000 (0:00:00.687) 0:10:13.777 ********
2025-04-10 01:00:11.892770 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892775 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892779 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892784 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892792 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892797 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892802 | orchestrator |
2025-04-10 01:00:11.892807 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-04-10 01:00:11.892812 | orchestrator | Thursday 10 April 2025 00:56:32 +0000 (0:00:00.932) 0:10:14.709 ********
2025-04-10 01:00:11.892817 | orchestrator | skipping: [testbed-node-0]
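The check_running_containers.yml tasks above probe each node for a running daemon container, and the later set_fact tasks turn those probes into per-daemon handler_*_status facts. A hedged sketch of one such probe and its derived fact follows; the filter string and variable names are assumptions about the shape of the real tasks, not copies of them:

```yaml
# Assumed shape of a container probe from check_running_containers.yml.
- name: check for a mon container
  command: "{{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
  register: ceph_mon_container_stat
  changed_when: false
  failed_when: false
  when: inventory_hostname in groups.get('mons', [])

# Assumed derivation of the handler fact from the probe result.
- name: set_fact handler_mon_status
  set_fact:
    handler_mon_status: "{{ ceph_mon_container_stat.stdout_lines | length > 0 }}"
  when: inventory_hostname in groups.get('mons', [])
```

This pattern explains the skip/ok split in the log: each probe runs only on the hosts in the matching group, so mon checks report ok on testbed-node-0/1/2 and skip on the OSD nodes, and vice versa.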
2025-04-10 01:00:11.892822 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892827 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892832 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892836 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892851 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892856 | orchestrator |
2025-04-10 01:00:11.892860 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-04-10 01:00:11.892865 | orchestrator | Thursday 10 April 2025 00:56:32 +0000 (0:00:00.729) 0:10:15.439 ********
2025-04-10 01:00:11.892870 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892875 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892880 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892885 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892890 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892895 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892899 | orchestrator |
2025-04-10 01:00:11.892904 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-04-10 01:00:11.892909 | orchestrator | Thursday 10 April 2025 00:56:33 +0000 (0:00:01.079) 0:10:16.518 ********
2025-04-10 01:00:11.892914 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892919 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892924 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892929 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892934 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892938 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892943 | orchestrator |
2025-04-10 01:00:11.892948 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-04-10 01:00:11.892953 | orchestrator | Thursday 10 April 2025 00:56:34 +0000 (0:00:00.744) 0:10:17.263 ********
2025-04-10 01:00:11.892958 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.892963 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.892970 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.892975 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.892980 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.892985 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.892990 | orchestrator |
2025-04-10 01:00:11.892994 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-04-10 01:00:11.892999 | orchestrator | Thursday 10 April 2025 00:56:35 +0000 (0:00:00.966) 0:10:18.229 ********
2025-04-10 01:00:11.893004 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893009 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893014 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893019 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893023 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893028 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893033 | orchestrator |
2025-04-10 01:00:11.893038 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-04-10 01:00:11.893043 | orchestrator | Thursday 10 April 2025 00:56:36 +0000 (0:00:00.674) 0:10:18.904 ********
2025-04-10 01:00:11.893048 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893052 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893057 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893062 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893067 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893072 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893081 | orchestrator |
2025-04-10 01:00:11.893086 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-04-10 01:00:11.893091 | orchestrator | Thursday 10 April 2025 00:56:37 +0000 (0:00:00.902) 0:10:19.807 ********
2025-04-10 01:00:11.893096 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893100 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893105 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893122 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893128 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893133 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893138 | orchestrator |
2025-04-10 01:00:11.893143 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-04-10 01:00:11.893148 | orchestrator | Thursday 10 April 2025 00:56:37 +0000 (0:00:00.721) 0:10:20.528 ********
2025-04-10 01:00:11.893153 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893157 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893162 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893167 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893172 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893177 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893182 | orchestrator |
2025-04-10 01:00:11.893187 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-04-10 01:00:11.893192 | orchestrator | Thursday 10 April 2025 00:56:38 +0000 (0:00:00.945) 0:10:21.473 ********
2025-04-10 01:00:11.893197 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893202 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893206 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893211 | orchestrator | skipping: [testbed-node-3]
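The skipped ceph-config tasks above count prospective OSDs by running `ceph-volume lvm batch --report` in JSON mode and measuring its output. A sketch of that step follows; the flags are real ceph-volume options, but the task bodies and the assumption that the new-style report parses as a JSON list of planned OSDs are mine, not taken from ceph-ansible:

```yaml
# Sketch only: assumed shape of the osd-counting step.
- name: run 'ceph-volume lvm batch --report' to see how many osds are to be created
  command: "ceph-volume lvm batch --report --format=json {{ _devices | join(' ') }}"
  register: lvm_batch_report
  changed_when: false

- name: set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)
  set_fact:
    num_osds: "{{ (lvm_batch_report.stdout | from_json) | length }}"
```

The separate "(legacy report)" and "(new report)" tasks in the log exist because ceph-volume changed its report JSON layout between releases, so each task parses one of the two formats and only one applies per cluster version.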
2025-04-10 01:00:11.893216 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893221 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893226 | orchestrator |
2025-04-10 01:00:11.893231 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-04-10 01:00:11.893236 | orchestrator | Thursday 10 April 2025 00:56:39 +0000 (0:00:00.729) 0:10:22.203 ********
2025-04-10 01:00:11.893241 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893246 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893250 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893255 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893260 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893265 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893270 | orchestrator |
2025-04-10 01:00:11.893275 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-04-10 01:00:11.893280 | orchestrator | Thursday 10 April 2025 00:56:40 +0000 (0:00:00.949) 0:10:23.152 ********
2025-04-10 01:00:11.893285 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893290 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893295 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893300 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893304 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893309 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893314 | orchestrator |
2025-04-10 01:00:11.893319 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-04-10 01:00:11.893324 | orchestrator | Thursday 10 April 2025 00:56:41 +0000 (0:00:00.717) 0:10:23.870 ********
2025-04-10 01:00:11.893329 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-10 01:00:11.893334 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-10 01:00:11.893339 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893344 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-10 01:00:11.893349 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-10 01:00:11.893354 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893359 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-10 01:00:11.893367 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-10 01:00:11.893372 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-10 01:00:11.893377 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-10 01:00:11.893382 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893387 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-10 01:00:11.893391 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-10 01:00:11.893396 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893401 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893409 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-10 01:00:11.893413 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-10 01:00:11.893418 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893423 | orchestrator |
2025-04-10 01:00:11.893428 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-04-10 01:00:11.893433 | orchestrator | Thursday 10 April 2025 00:56:42 +0000 (0:00:01.042) 0:10:24.913 ********
2025-04-10 01:00:11.893438 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-04-10 01:00:11.893445 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-04-10 01:00:11.893450 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893455 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-04-10 01:00:11.893460 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-04-10 01:00:11.893465 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893470 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-04-10 01:00:11.893475 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-04-10 01:00:11.893480 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893485 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-04-10 01:00:11.893490 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-04-10 01:00:11.893495 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893499 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-04-10 01:00:11.893504 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-04-10 01:00:11.893509 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893514 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-04-10 01:00:11.893519 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-04-10 01:00:11.893524 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893529 | orchestrator |
2025-04-10 01:00:11.893533 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-04-10 01:00:11.893549 | orchestrator | Thursday 10 April 2025 00:56:42 +0000 (0:00:00.761) 0:10:25.674 ********
2025-04-10 01:00:11.893554 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893559 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893564 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893569 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893574 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893579 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893584 | orchestrator |
2025-04-10 01:00:11.893588 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-04-10 01:00:11.893593 | orchestrator | Thursday 10 April 2025 00:56:44 +0000 (0:00:01.015) 0:10:26.690 ********
2025-04-10 01:00:11.893598 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893603 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893608 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893613 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893618 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893623 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893627 | orchestrator |
2025-04-10 01:00:11.893632 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-04-10 01:00:11.893641 | orchestrator | Thursday 10 April 2025 00:56:44 +0000 (0:00:00.816) 0:10:27.507 ********
2025-04-10 01:00:11.893646 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893650 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893655 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893660 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893665 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893670 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893675 | orchestrator |
2025-04-10 01:00:11.893679 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-10 01:00:11.893684 | orchestrator | Thursday 10 April 2025 00:56:45 +0000 (0:00:01.019) 0:10:28.526 ********
2025-04-10 01:00:11.893689 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893694 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893699 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893703 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893708 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893713 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893718 | orchestrator |
2025-04-10 01:00:11.893725 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-10 01:00:11.893730 | orchestrator | Thursday 10 April 2025 00:56:46 +0000 (0:00:00.698) 0:10:29.224 ********
2025-04-10 01:00:11.893735 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893740 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893745 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893750 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893754 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893759 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893764 | orchestrator |
2025-04-10 01:00:11.893769 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-10 01:00:11.893774 | orchestrator | Thursday 10 April 2025 00:56:47 +0000 (0:00:00.956) 0:10:30.180 ********
2025-04-10 01:00:11.893779 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893784 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893788 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:11.893793 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.893798 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.893803 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.893807 | orchestrator |
2025-04-10 01:00:11.893812 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-10 01:00:11.893817 | orchestrator | Thursday 10 April 2025 00:56:48 +0000 (0:00:00.700) 0:10:30.881 ********
2025-04-10 01:00:11.893822 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-10 01:00:11.893827 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-10 01:00:11.893832 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-10 01:00:11.893836 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893852 | orchestrator |
2025-04-10 01:00:11.893857 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-10 01:00:11.893862 | orchestrator | Thursday 10 April 2025 00:56:48 +0000 (0:00:00.506) 0:10:31.387 ********
2025-04-10 01:00:11.893867 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-10 01:00:11.893872 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-10 01:00:11.893877 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-10 01:00:11.893882 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893886 | orchestrator |
2025-04-10 01:00:11.893891 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-10 01:00:11.893896 | orchestrator | Thursday 10 April 2025 00:56:49 +0000 (0:00:00.741) 0:10:32.128 ********
2025-04-10 01:00:11.893901 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-10 01:00:11.893906 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-10 01:00:11.893914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-10 01:00:11.893919 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893924 | orchestrator |
2025-04-10 01:00:11.893929 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-10 01:00:11.893934 | orchestrator | Thursday 10 April 2025 00:56:50 +0000 (0:00:00.970) 0:10:33.099 ********
2025-04-10 01:00:11.893938 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:11.893943 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:11.893948 | orchestrator | skipping:
[testbed-node-2] 2025-04-10 01:00:11.893953 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.893958 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.893965 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.893970 | orchestrator | 2025-04-10 01:00:11.893975 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-10 01:00:11.893980 | orchestrator | Thursday 10 April 2025 00:56:51 +0000 (0:00:00.707) 0:10:33.806 ******** 2025-04-10 01:00:11.893985 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-10 01:00:11.893990 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.894007 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-10 01:00:11.894041 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.894048 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-10 01:00:11.894053 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.894058 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-10 01:00:11.894063 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.894068 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-10 01:00:11.894072 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.894077 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-10 01:00:11.894082 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.894087 | orchestrator | 2025-04-10 01:00:11.894092 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-10 01:00:11.894097 | orchestrator | Thursday 10 April 2025 00:56:52 +0000 (0:00:01.436) 0:10:35.242 ******** 2025-04-10 01:00:11.894102 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.894107 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.894112 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.894116 | orchestrator | skipping: 
[testbed-node-3] 2025-04-10 01:00:11.894121 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.894126 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.894131 | orchestrator | 2025-04-10 01:00:11.894136 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-10 01:00:11.894140 | orchestrator | Thursday 10 April 2025 00:56:53 +0000 (0:00:00.689) 0:10:35.932 ******** 2025-04-10 01:00:11.894145 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.894150 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.894155 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.894160 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.894165 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.894169 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.894174 | orchestrator | 2025-04-10 01:00:11.894179 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-10 01:00:11.894184 | orchestrator | Thursday 10 April 2025 00:56:54 +0000 (0:00:00.925) 0:10:36.858 ******** 2025-04-10 01:00:11.894189 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-10 01:00:11.894194 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.894198 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-10 01:00:11.894203 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-10 01:00:11.894208 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.894213 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-10 01:00:11.894218 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.894223 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.894232 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-10 01:00:11.894237 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.894242 | orchestrator | skipping: 
[testbed-node-5] => (item=0)  2025-04-10 01:00:11.894246 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.894251 | orchestrator | 2025-04-10 01:00:11.894256 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-10 01:00:11.894261 | orchestrator | Thursday 10 April 2025 00:56:55 +0000 (0:00:00.932) 0:10:37.791 ******** 2025-04-10 01:00:11.894266 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.894271 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.894276 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.894281 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-10 01:00:11.894286 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.894291 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-10 01:00:11.894295 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.894300 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-10 01:00:11.894305 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.894310 | orchestrator | 2025-04-10 01:00:11.894315 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-10 01:00:11.894320 | orchestrator | Thursday 10 April 2025 00:56:56 +0000 (0:00:00.990) 0:10:38.781 ******** 2025-04-10 01:00:11.894325 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-10 01:00:11.894330 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-10 01:00:11.894334 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-10 01:00:11.894339 | orchestrator | skipping: [testbed-node-0] 2025-04-10 
01:00:11.894344 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-10 01:00:11.894349 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-10 01:00:11.894354 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-10 01:00:11.894359 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-10 01:00:11.894364 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-10 01:00:11.894368 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-10 01:00:11.894373 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.894378 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:00:11.894383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:00:11.894388 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.894393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:00:11.894398 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-10 01:00:11.894402 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-10 01:00:11.894407 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-10 01:00:11.894412 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.894417 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.894422 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-10 01:00:11.894432 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-10 01:00:11.894437 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-10 01:00:11.894442 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.894447 | orchestrator | 2025-04-10 01:00:11.894452 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-10 01:00:11.894457 | 
orchestrator | Thursday 10 April 2025 00:56:57 +0000 (0:00:01.613) 0:10:40.395 ******** 2025-04-10 01:00:11.894465 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.894470 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.894475 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.894480 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.894485 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.894489 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.894494 | orchestrator | 2025-04-10 01:00:11.894499 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-10 01:00:11.894504 | orchestrator | Thursday 10 April 2025 00:56:59 +0000 (0:00:01.441) 0:10:41.836 ******** 2025-04-10 01:00:11.894509 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.894514 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.894518 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.894523 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-10 01:00:11.894528 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.894533 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-10 01:00:11.894538 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.894543 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-10 01:00:11.894548 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.894552 | orchestrator | 2025-04-10 01:00:11.894557 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-10 01:00:11.894562 | orchestrator | Thursday 10 April 2025 00:57:00 +0000 (0:00:01.397) 0:10:43.234 ******** 2025-04-10 01:00:11.894567 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.894572 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.894576 | orchestrator | skipping: [testbed-node-2] 2025-04-10 
01:00:11.894584 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.894589 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.894594 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.894599 | orchestrator | 2025-04-10 01:00:11.894604 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-10 01:00:11.894609 | orchestrator | Thursday 10 April 2025 00:57:01 +0000 (0:00:01.435) 0:10:44.669 ******** 2025-04-10 01:00:11.894614 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:11.894618 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:11.894623 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:11.894628 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.894633 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.894638 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.894642 | orchestrator | 2025-04-10 01:00:11.894651 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-04-10 01:00:11.894656 | orchestrator | Thursday 10 April 2025 00:57:03 +0000 (0:00:01.401) 0:10:46.071 ******** 2025-04-10 01:00:11.894661 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.894666 | orchestrator | 2025-04-10 01:00:11.894671 | orchestrator | TASK [ceph-crash : get keys from monitors] ************************************* 2025-04-10 01:00:11.894675 | orchestrator | Thursday 10 April 2025 00:57:06 +0000 (0:00:03.471) 0:10:49.542 ******** 2025-04-10 01:00:11.894680 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.894685 | orchestrator | 2025-04-10 01:00:11.894690 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-04-10 01:00:11.894695 | orchestrator | Thursday 10 April 2025 00:57:08 +0000 (0:00:01.852) 0:10:51.395 ******** 2025-04-10 01:00:11.894699 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.894704 
| orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.894709 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.894714 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.894719 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.894724 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.894728 | orchestrator | 2025-04-10 01:00:11.894733 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-04-10 01:00:11.894738 | orchestrator | Thursday 10 April 2025 00:57:11 +0000 (0:00:02.288) 0:10:53.683 ******** 2025-04-10 01:00:11.894746 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.894750 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.894755 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.894760 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.894765 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.894770 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.894774 | orchestrator | 2025-04-10 01:00:11.894779 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-04-10 01:00:11.894784 | orchestrator | Thursday 10 April 2025 00:57:12 +0000 (0:00:01.184) 0:10:54.868 ******** 2025-04-10 01:00:11.894789 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.894794 | orchestrator | 2025-04-10 01:00:11.894799 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-04-10 01:00:11.894804 | orchestrator | Thursday 10 April 2025 00:57:13 +0000 (0:00:01.504) 0:10:56.372 ******** 2025-04-10 01:00:11.894809 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.894814 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.894819 | orchestrator | changed: 
[testbed-node-2] 2025-04-10 01:00:11.894823 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.894828 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.894833 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.894860 | orchestrator | 2025-04-10 01:00:11.894867 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-04-10 01:00:11.894872 | orchestrator | Thursday 10 April 2025 00:57:15 +0000 (0:00:02.131) 0:10:58.504 ******** 2025-04-10 01:00:11.894877 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.894882 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.894886 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.894891 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.894899 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.894905 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.894910 | orchestrator | 2025-04-10 01:00:11.894914 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-04-10 01:00:11.894919 | orchestrator | Thursday 10 April 2025 00:57:21 +0000 (0:00:05.258) 0:11:03.762 ******** 2025-04-10 01:00:11.894924 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.894929 | orchestrator | 2025-04-10 01:00:11.894934 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-04-10 01:00:11.894939 | orchestrator | Thursday 10 April 2025 00:57:22 +0000 (0:00:01.583) 0:11:05.346 ******** 2025-04-10 01:00:11.894944 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.894949 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.894954 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.894958 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.894963 | 
orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.894968 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.894973 | orchestrator | 2025-04-10 01:00:11.894978 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-04-10 01:00:11.894983 | orchestrator | Thursday 10 April 2025 00:57:23 +0000 (0:00:00.760) 0:11:06.107 ******** 2025-04-10 01:00:11.894987 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:11.894992 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:11.894997 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:11.895002 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.895006 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.895011 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.895016 | orchestrator | 2025-04-10 01:00:11.895021 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-04-10 01:00:11.895026 | orchestrator | Thursday 10 April 2025 00:57:26 +0000 (0:00:02.719) 0:11:08.826 ******** 2025-04-10 01:00:11.895035 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:11.895041 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:11.895045 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:11.895053 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.895057 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.895062 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.895067 | orchestrator | 2025-04-10 01:00:11.895072 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-04-10 01:00:11.895077 | orchestrator | 2025-04-10 01:00:11.895082 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-10 01:00:11.895087 | orchestrator | Thursday 10 April 2025 00:57:29 +0000 (0:00:02.986) 0:11:11.813 ******** 2025-04-10 01:00:11.895092 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.895099 | orchestrator | 2025-04-10 01:00:11.895104 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-10 01:00:11.895109 | orchestrator | Thursday 10 April 2025 00:57:30 +0000 (0:00:00.923) 0:11:12.737 ******** 2025-04-10 01:00:11.895114 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895118 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895123 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895128 | orchestrator | 2025-04-10 01:00:11.895133 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-10 01:00:11.895138 | orchestrator | Thursday 10 April 2025 00:57:30 +0000 (0:00:00.397) 0:11:13.134 ******** 2025-04-10 01:00:11.895143 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.895148 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.895152 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.895157 | orchestrator | 2025-04-10 01:00:11.895162 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-10 01:00:11.895167 | orchestrator | Thursday 10 April 2025 00:57:31 +0000 (0:00:00.746) 0:11:13.881 ******** 2025-04-10 01:00:11.895172 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.895177 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.895182 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.895186 | orchestrator | 2025-04-10 01:00:11.895194 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-10 01:00:11.895199 | orchestrator | Thursday 10 April 2025 00:57:32 +0000 (0:00:01.032) 0:11:14.913 ******** 2025-04-10 01:00:11.895203 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.895208 | orchestrator | ok: [testbed-node-4] 
2025-04-10 01:00:11.895213 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.895218 | orchestrator | 2025-04-10 01:00:11.895223 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-10 01:00:11.895228 | orchestrator | Thursday 10 April 2025 00:57:33 +0000 (0:00:00.799) 0:11:15.712 ******** 2025-04-10 01:00:11.895233 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895238 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895243 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895247 | orchestrator | 2025-04-10 01:00:11.895252 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-10 01:00:11.895257 | orchestrator | Thursday 10 April 2025 00:57:33 +0000 (0:00:00.530) 0:11:16.242 ******** 2025-04-10 01:00:11.895262 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895266 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895271 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895276 | orchestrator | 2025-04-10 01:00:11.895281 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-10 01:00:11.895286 | orchestrator | Thursday 10 April 2025 00:57:34 +0000 (0:00:00.633) 0:11:16.876 ******** 2025-04-10 01:00:11.895291 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895296 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895301 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895305 | orchestrator | 2025-04-10 01:00:11.895310 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-10 01:00:11.895318 | orchestrator | Thursday 10 April 2025 00:57:35 +0000 (0:00:00.908) 0:11:17.784 ******** 2025-04-10 01:00:11.895323 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895327 | orchestrator | skipping: [testbed-node-4] 
2025-04-10 01:00:11.895332 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895337 | orchestrator | 2025-04-10 01:00:11.895344 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-10 01:00:11.895350 | orchestrator | Thursday 10 April 2025 00:57:35 +0000 (0:00:00.544) 0:11:18.328 ******** 2025-04-10 01:00:11.895354 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895359 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895364 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895369 | orchestrator | 2025-04-10 01:00:11.895374 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-10 01:00:11.895379 | orchestrator | Thursday 10 April 2025 00:57:35 +0000 (0:00:00.349) 0:11:18.678 ******** 2025-04-10 01:00:11.895383 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895388 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895393 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895398 | orchestrator | 2025-04-10 01:00:11.895403 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-10 01:00:11.895408 | orchestrator | Thursday 10 April 2025 00:57:36 +0000 (0:00:00.436) 0:11:19.114 ******** 2025-04-10 01:00:11.895412 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.895417 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.895422 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.895427 | orchestrator | 2025-04-10 01:00:11.895432 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-10 01:00:11.895437 | orchestrator | Thursday 10 April 2025 00:57:37 +0000 (0:00:01.147) 0:11:20.262 ******** 2025-04-10 01:00:11.895441 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895446 | orchestrator | skipping: [testbed-node-4] 2025-04-10 
01:00:11.895451 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895456 | orchestrator | 2025-04-10 01:00:11.895461 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-10 01:00:11.895466 | orchestrator | Thursday 10 April 2025 00:57:37 +0000 (0:00:00.400) 0:11:20.663 ******** 2025-04-10 01:00:11.895470 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895475 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895480 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895485 | orchestrator | 2025-04-10 01:00:11.895490 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-10 01:00:11.895494 | orchestrator | Thursday 10 April 2025 00:57:38 +0000 (0:00:00.383) 0:11:21.046 ******** 2025-04-10 01:00:11.895499 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.895504 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.895509 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.895514 | orchestrator | 2025-04-10 01:00:11.895519 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-10 01:00:11.895524 | orchestrator | Thursday 10 April 2025 00:57:38 +0000 (0:00:00.360) 0:11:21.407 ******** 2025-04-10 01:00:11.895529 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.895534 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.895538 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.895543 | orchestrator | 2025-04-10 01:00:11.895548 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-10 01:00:11.895553 | orchestrator | Thursday 10 April 2025 00:57:39 +0000 (0:00:00.920) 0:11:22.327 ******** 2025-04-10 01:00:11.895558 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.895562 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.895567 | orchestrator | ok: 
[testbed-node-5] 2025-04-10 01:00:11.895574 | orchestrator | 2025-04-10 01:00:11.895579 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-10 01:00:11.895584 | orchestrator | Thursday 10 April 2025 00:57:40 +0000 (0:00:00.570) 0:11:22.898 ******** 2025-04-10 01:00:11.895592 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895597 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895602 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895607 | orchestrator | 2025-04-10 01:00:11.895612 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-10 01:00:11.895616 | orchestrator | Thursday 10 April 2025 00:57:40 +0000 (0:00:00.312) 0:11:23.211 ******** 2025-04-10 01:00:11.895621 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895626 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895631 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895636 | orchestrator | 2025-04-10 01:00:11.895641 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-10 01:00:11.895646 | orchestrator | Thursday 10 April 2025 00:57:40 +0000 (0:00:00.294) 0:11:23.506 ******** 2025-04-10 01:00:11.895651 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895656 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895661 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895665 | orchestrator | 2025-04-10 01:00:11.895672 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-10 01:00:11.895677 | orchestrator | Thursday 10 April 2025 00:57:41 +0000 (0:00:00.491) 0:11:23.997 ******** 2025-04-10 01:00:11.895682 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.895687 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.895692 | orchestrator | ok: [testbed-node-5] 
2025-04-10 01:00:11.895697 | orchestrator | 2025-04-10 01:00:11.895702 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-10 01:00:11.895707 | orchestrator | Thursday 10 April 2025 00:57:41 +0000 (0:00:00.302) 0:11:24.300 ******** 2025-04-10 01:00:11.895711 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895716 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895721 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895726 | orchestrator | 2025-04-10 01:00:11.895730 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-10 01:00:11.895735 | orchestrator | Thursday 10 April 2025 00:57:41 +0000 (0:00:00.293) 0:11:24.594 ******** 2025-04-10 01:00:11.895740 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895745 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895750 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895754 | orchestrator | 2025-04-10 01:00:11.895759 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-10 01:00:11.895764 | orchestrator | Thursday 10 April 2025 00:57:42 +0000 (0:00:00.277) 0:11:24.871 ******** 2025-04-10 01:00:11.895769 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895774 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895779 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.895784 | orchestrator | 2025-04-10 01:00:11.895789 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-10 01:00:11.895796 | orchestrator | Thursday 10 April 2025 00:57:42 +0000 (0:00:00.508) 0:11:25.379 ******** 2025-04-10 01:00:11.895801 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.895805 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.895810 | orchestrator | skipping: [testbed-node-5] 
2025-04-10 01:00:11.895815 | orchestrator |
2025-04-10 01:00:11.895820 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-04-10 01:00:11.895825 | orchestrator | Thursday 10 April 2025 00:57:43 +0000 (0:00:00.355) 0:11:25.735 ********
2025-04-10 01:00:11.895829 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.895834 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.895849 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.895855 | orchestrator |
2025-04-10 01:00:11.895859 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-04-10 01:00:11.895864 | orchestrator | Thursday 10 April 2025 00:57:43 +0000 (0:00:00.440) 0:11:26.175 ********
2025-04-10 01:00:11.895872 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.895877 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.895882 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.895886 | orchestrator |
2025-04-10 01:00:11.895891 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-04-10 01:00:11.895896 | orchestrator | Thursday 10 April 2025 00:57:43 +0000 (0:00:00.351) 0:11:26.527 ********
2025-04-10 01:00:11.895901 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.895906 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.895911 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.895915 | orchestrator |
2025-04-10 01:00:11.895920 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-04-10 01:00:11.895925 | orchestrator | Thursday 10 April 2025 00:57:44 +0000 (0:00:00.676) 0:11:27.204 ********
2025-04-10 01:00:11.895930 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.895935 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.895940 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.895945 | orchestrator |
2025-04-10 01:00:11.895950 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-04-10 01:00:11.895955 | orchestrator | Thursday 10 April 2025 00:57:44 +0000 (0:00:00.351) 0:11:27.556 ********
2025-04-10 01:00:11.895960 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.895965 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.895969 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.895974 | orchestrator |
2025-04-10 01:00:11.895979 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-04-10 01:00:11.895984 | orchestrator | Thursday 10 April 2025 00:57:45 +0000 (0:00:00.339) 0:11:27.895 ********
2025-04-10 01:00:11.895992 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.895997 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896001 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896006 | orchestrator |
2025-04-10 01:00:11.896011 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-04-10 01:00:11.896016 | orchestrator | Thursday 10 April 2025 00:57:45 +0000 (0:00:00.373) 0:11:28.269 ********
2025-04-10 01:00:11.896021 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896026 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896031 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896035 | orchestrator |
2025-04-10 01:00:11.896040 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-04-10 01:00:11.896045 | orchestrator | Thursday 10 April 2025 00:57:46 +0000 (0:00:00.575) 0:11:28.845 ********
2025-04-10 01:00:11.896050 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896055 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896059 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896064 | orchestrator |
2025-04-10 01:00:11.896069 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-04-10 01:00:11.896074 | orchestrator | Thursday 10 April 2025 00:57:46 +0000 (0:00:00.291) 0:11:29.136 ********
2025-04-10 01:00:11.896079 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-10 01:00:11.896084 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-10 01:00:11.896089 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896094 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-10 01:00:11.896099 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-10 01:00:11.896103 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896108 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-10 01:00:11.896115 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-10 01:00:11.896120 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896128 | orchestrator |
2025-04-10 01:00:11.896132 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-04-10 01:00:11.896140 | orchestrator | Thursday 10 April 2025 00:57:46 +0000 (0:00:00.327) 0:11:29.463 ********
2025-04-10 01:00:11.896145 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-04-10 01:00:11.896150 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-04-10 01:00:11.896155 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896160 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-04-10 01:00:11.896164 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-04-10 01:00:11.896169 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896174 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-04-10 01:00:11.896179 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-04-10 01:00:11.896184 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896189 | orchestrator |
2025-04-10 01:00:11.896193 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-04-10 01:00:11.896198 | orchestrator | Thursday 10 April 2025 00:57:47 +0000 (0:00:00.334) 0:11:29.798 ********
2025-04-10 01:00:11.896203 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896208 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896213 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896218 | orchestrator |
2025-04-10 01:00:11.896224 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-04-10 01:00:11.896229 | orchestrator | Thursday 10 April 2025 00:57:47 +0000 (0:00:00.628) 0:11:30.427 ********
2025-04-10 01:00:11.896234 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896239 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896244 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896249 | orchestrator |
2025-04-10 01:00:11.896254 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-04-10 01:00:11.896259 | orchestrator | Thursday 10 April 2025 00:57:48 +0000 (0:00:00.351) 0:11:30.778 ********
2025-04-10 01:00:11.896264 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896268 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896273 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896278 | orchestrator |
2025-04-10 01:00:11.896283 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-10 01:00:11.896291 | orchestrator | Thursday 10 April 2025 00:57:48 +0000 (0:00:00.370) 0:11:31.149 ********
2025-04-10 01:00:11.896296 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896301 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896306 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896311 | orchestrator |
2025-04-10 01:00:11.896315 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-10 01:00:11.896320 | orchestrator | Thursday 10 April 2025 00:57:48 +0000 (0:00:00.314) 0:11:31.463 ********
2025-04-10 01:00:11.896325 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896330 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896335 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896340 | orchestrator |
2025-04-10 01:00:11.896344 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-10 01:00:11.896349 | orchestrator | Thursday 10 April 2025 00:57:49 +0000 (0:00:00.725) 0:11:32.188 ********
2025-04-10 01:00:11.896354 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896359 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896364 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896368 | orchestrator |
2025-04-10 01:00:11.896373 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-10 01:00:11.896378 | orchestrator | Thursday 10 April 2025 00:57:49 +0000 (0:00:00.412) 0:11:32.601 ********
2025-04-10 01:00:11.896383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.896388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.896396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.896401 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896406 | orchestrator |
2025-04-10 01:00:11.896410 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-10 01:00:11.896415 | orchestrator | Thursday 10 April 2025 00:57:50 +0000 (0:00:00.513) 0:11:33.114 ********
2025-04-10 01:00:11.896420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.896425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.896430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.896435 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896440 | orchestrator |
2025-04-10 01:00:11.896445 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-10 01:00:11.896449 | orchestrator | Thursday 10 April 2025 00:57:50 +0000 (0:00:00.471) 0:11:33.585 ********
2025-04-10 01:00:11.896454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.896459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.896464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.896469 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896474 | orchestrator |
2025-04-10 01:00:11.896478 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-10 01:00:11.896483 | orchestrator | Thursday 10 April 2025 00:57:51 +0000 (0:00:00.446) 0:11:34.032 ********
2025-04-10 01:00:11.896488 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896493 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896498 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896503 | orchestrator |
2025-04-10 01:00:11.896508 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-04-10 01:00:11.896513 | orchestrator | Thursday 10 April 2025 00:57:51 +0000 (0:00:00.358) 0:11:34.391 ********
2025-04-10 01:00:11.896517 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-10 01:00:11.896522 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896527 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-10 01:00:11.896532 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896537 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-10 01:00:11.896541 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896546 | orchestrator |
2025-04-10 01:00:11.896551 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-04-10 01:00:11.896556 | orchestrator | Thursday 10 April 2025 00:57:52 +0000 (0:00:00.888) 0:11:35.279 ********
2025-04-10 01:00:11.896561 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896565 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896570 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896575 | orchestrator |
2025-04-10 01:00:11.896580 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-10 01:00:11.896585 | orchestrator | Thursday 10 April 2025 00:57:53 +0000 (0:00:00.418) 0:11:35.698 ********
2025-04-10 01:00:11.896589 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896594 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896599 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896604 | orchestrator |
2025-04-10 01:00:11.896609 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-04-10 01:00:11.896613 | orchestrator | Thursday 10 April 2025 00:57:53 +0000 (0:00:00.382) 0:11:36.080 ********
2025-04-10 01:00:11.896618 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-10 01:00:11.896623 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896630 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-10 01:00:11.896635 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896640 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-10 01:00:11.896645 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896653 | orchestrator |
2025-04-10 01:00:11.896657 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-04-10 01:00:11.896662 | orchestrator | Thursday 10 April 2025 00:57:53 +0000 (0:00:00.482) 0:11:36.562 ********
2025-04-10 01:00:11.896667 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-04-10 01:00:11.896672 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896677 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-04-10 01:00:11.896682 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896687 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-04-10 01:00:11.896692 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896696 | orchestrator |
2025-04-10 01:00:11.896701 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-04-10 01:00:11.896706 | orchestrator | Thursday 10 April 2025 00:57:54 +0000 (0:00:00.682) 0:11:37.245 ********
2025-04-10 01:00:11.896711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.896716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.896721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.896725 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896730 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-04-10 01:00:11.896735 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-04-10 01:00:11.896740 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-04-10 01:00:11.896745 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896750 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-04-10 01:00:11.896755 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-04-10 01:00:11.896760 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-04-10 01:00:11.896764 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896769 | orchestrator |
2025-04-10 01:00:11.896774 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-04-10 01:00:11.896779 | orchestrator | Thursday 10 April 2025 00:57:55 +0000 (0:00:00.702) 0:11:37.947 ********
2025-04-10 01:00:11.896784 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896789 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896794 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896799 | orchestrator |
2025-04-10 01:00:11.896803 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-04-10 01:00:11.896808 | orchestrator | Thursday 10 April 2025 00:57:56 +0000 (0:00:00.916) 0:11:38.864 ********
2025-04-10 01:00:11.896813 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-10 01:00:11.896818 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896823 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-04-10 01:00:11.896828 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896833 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-04-10 01:00:11.896837 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896865 | orchestrator |
2025-04-10 01:00:11.896870 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-04-10 01:00:11.896875 | orchestrator | Thursday 10 April 2025 00:57:56 +0000 (0:00:00.670) 0:11:39.534 ********
2025-04-10 01:00:11.896880 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896885 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896890 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896895 | orchestrator |
2025-04-10 01:00:11.896900 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-04-10 01:00:11.896907 | orchestrator | Thursday 10 April 2025 00:57:57 +0000 (0:00:00.905) 0:11:40.439 ********
2025-04-10 01:00:11.896915 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.896920 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896925 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896930 | orchestrator |
2025-04-10 01:00:11.896935 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] ***************************
2025-04-10 01:00:11.896940 | orchestrator | Thursday 10 April 2025 00:57:58 +0000 (0:00:00.586) 0:11:41.025 ********
2025-04-10 01:00:11.896945 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.896949 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.896954 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-04-10 01:00:11.896959 | orchestrator |
2025-04-10 01:00:11.896964 | orchestrator | TASK [ceph-facts : get current default crush rule details] *********************
2025-04-10 01:00:11.896969 | orchestrator | Thursday 10 April 2025 00:57:58 +0000 (0:00:00.421) 0:11:41.447 ********
2025-04-10 01:00:11.896974 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-04-10 01:00:11.896979 | orchestrator |
2025-04-10 01:00:11.896984 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************
2025-04-10 01:00:11.896988 | orchestrator | Thursday 10 April 2025 00:58:01 +0000 (0:00:02.306) 0:11:43.754 ********
2025-04-10 01:00:11.896994 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-04-10 01:00:11.897000 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.897005 | orchestrator |
2025-04-10 01:00:11.897010 | orchestrator | TASK [ceph-mds : create filesystem pools] **************************************
2025-04-10 01:00:11.897017 | orchestrator | Thursday 10 April 2025 00:58:01 +0000 (0:00:00.416) 0:11:44.170 ********
2025-04-10 01:00:11.897023 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-04-10 01:00:11.897029 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-04-10 01:00:11.897034 | orchestrator |
2025-04-10 01:00:11.897039 | orchestrator | TASK [ceph-mds : create ceph filesystem] ***************************************
2025-04-10 01:00:11.897044 | orchestrator | Thursday 10 April 2025 00:58:08 +0000 (0:00:06.722) 0:11:50.892 ********
2025-04-10 01:00:11.897049 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-04-10 01:00:11.897053 | orchestrator |
2025-04-10 01:00:11.897058 | orchestrator | TASK [ceph-mds : include common.yml] *******************************************
2025-04-10 01:00:11.897063 | orchestrator | Thursday 10 April 2025 00:58:11 +0000 (0:00:03.092) 0:11:53.984 ********
2025-04-10 01:00:11.897068 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:00:11.897073 | orchestrator |
2025-04-10 01:00:11.897078 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] *********************
2025-04-10 01:00:11.897083 | orchestrator | Thursday 10 April 2025 00:58:12 +0000 (0:00:00.930) 0:11:54.915 ********
2025-04-10 01:00:11.897088 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-04-10 01:00:11.897092 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-04-10 01:00:11.897097 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-04-10 01:00:11.897102 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-04-10 01:00:11.897107 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-04-10 01:00:11.897112 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-04-10 01:00:11.897119 | orchestrator |
2025-04-10 01:00:11.897124 | orchestrator | TASK [ceph-mds : get keys from monitors] ***************************************
2025-04-10 01:00:11.897129 | orchestrator | Thursday 10 April 2025 00:58:13 +0000 (0:00:01.092) 0:11:56.007 ********
2025-04-10 01:00:11.897134 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-10 01:00:11.897139 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-10 01:00:11.897144 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-04-10 01:00:11.897148 | orchestrator |
2025-04-10 01:00:11.897153 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] ***********************************
2025-04-10 01:00:11.897158 | orchestrator | Thursday 10 April 2025 00:58:15 +0000 (0:00:01.797) 0:11:57.805 ********
2025-04-10 01:00:11.897163 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-04-10 01:00:11.897168 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-10 01:00:11.897172 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.897177 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-04-10 01:00:11.897182 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-04-10 01:00:11.897187 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.897192 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-04-10 01:00:11.897196 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-04-10 01:00:11.897201 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.897207 | orchestrator |
2025-04-10 01:00:11.897211 | orchestrator | TASK [ceph-mds : non_containerized.yml] ****************************************
2025-04-10 01:00:11.897216 | orchestrator | Thursday 10 April 2025 00:58:16 +0000 (0:00:01.200) 0:11:59.006 ********
2025-04-10 01:00:11.897221 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.897226 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.897231 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.897236 | orchestrator |
2025-04-10 01:00:11.897240 | orchestrator | TASK [ceph-mds : containerized.yml] ********************************************
2025-04-10 01:00:11.897245 | orchestrator | Thursday 10 April 2025 00:58:16 +0000 (0:00:00.629) 0:11:59.635 ********
2025-04-10 01:00:11.897250 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:00:11.897255 | orchestrator |
2025-04-10 01:00:11.897260 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************
2025-04-10 01:00:11.897265 | orchestrator | Thursday 10 April 2025 00:58:17 +0000 (0:00:00.588) 0:12:00.224 ********
2025-04-10 01:00:11.897270 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:00:11.897275 | orchestrator |
2025-04-10 01:00:11.897279 | orchestrator | TASK [ceph-mds : generate systemd unit file] ***********************************
2025-04-10 01:00:11.897284 | orchestrator | Thursday 10 April 2025 00:58:18 +0000 (0:00:00.826) 0:12:01.050 ********
2025-04-10 01:00:11.897289 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.897294 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.897299 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.897304 | orchestrator |
2025-04-10 01:00:11.897311 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************
2025-04-10 01:00:11.897316 | orchestrator | Thursday 10 April 2025 00:58:19 +0000 (0:00:01.208) 0:12:02.259 ********
2025-04-10 01:00:11.897320 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.897325 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.897330 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.897335 | orchestrator |
2025-04-10 01:00:11.897342 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] ***************************************
2025-04-10 01:00:11.897347 | orchestrator | Thursday 10 April 2025 00:58:20 +0000 (0:00:01.296) 0:12:03.556 ********
2025-04-10 01:00:11.897352 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.897356 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.897361 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.897369 | orchestrator |
2025-04-10 01:00:11.897374 | orchestrator | TASK [ceph-mds : systemd start mds container] **********************************
2025-04-10 01:00:11.897379 | orchestrator | Thursday 10 April 2025 00:58:23 +0000 (0:00:02.617) 0:12:06.173 ********
2025-04-10 01:00:11.897383 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.897388 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.897393 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.897398 | orchestrator |
2025-04-10 01:00:11.897403 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] *********************************
2025-04-10 01:00:11.897408 | orchestrator | Thursday 10 April 2025 00:58:25 +0000 (0:00:01.953) 0:12:08.126 ********
2025-04-10 01:00:11.897413 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left).
2025-04-10 01:00:11.897417 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left).
2025-04-10 01:00:11.897422 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left).
2025-04-10 01:00:11.897427 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.897432 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.897437 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.897442 | orchestrator |
2025-04-10 01:00:11.897447 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-04-10 01:00:11.897451 | orchestrator | Thursday 10 April 2025 00:58:42 +0000 (0:00:17.099) 0:12:25.225 ********
2025-04-10 01:00:11.897456 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.897461 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.897466 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.897471 | orchestrator |
2025-04-10 01:00:11.897476 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-04-10 01:00:11.897480 | orchestrator | Thursday 10 April 2025 00:58:43 +0000 (0:00:00.695) 0:12:25.921 ********
2025-04-10 01:00:11.897485 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:00:11.897490 | orchestrator |
2025-04-10 01:00:11.897495 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ********
2025-04-10 01:00:11.897500 | orchestrator | Thursday 10 April 2025 00:58:44 +0000 (0:00:00.857) 0:12:26.778 ********
2025-04-10 01:00:11.897505 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.897510 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.897514 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.897519 | orchestrator |
2025-04-10 01:00:11.897524 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] ***********************
2025-04-10 01:00:11.897529 | orchestrator | Thursday 10 April 2025 00:58:44 +0000 (0:00:00.367) 0:12:27.146 ********
2025-04-10 01:00:11.897534 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.897539 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.897543 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.897548 | orchestrator |
2025-04-10 01:00:11.897553 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ********************
2025-04-10 01:00:11.897558 | orchestrator | Thursday 10 April 2025 00:58:45 +0000 (0:00:01.180) 0:12:28.326 ********
2025-04-10 01:00:11.897563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-10 01:00:11.897567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-10 01:00:11.897572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-10 01:00:11.897577 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.897582 | orchestrator |
2025-04-10 01:00:11.897587 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] *********
2025-04-10 01:00:11.897592 | orchestrator | Thursday 10 April 2025 00:58:46 +0000 (0:00:01.267) 0:12:29.593 ********
2025-04-10 01:00:11.897596 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.897601 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.897606 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.897611 | orchestrator |
2025-04-10 01:00:11.897616 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-10 01:00:11.897623 | orchestrator | Thursday 10 April 2025 00:58:47 +0000 (0:00:00.359) 0:12:29.952 ********
2025-04-10 01:00:11.897628 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:00:11.897633 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:00:11.897638 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:00:11.897643 | orchestrator |
2025-04-10 01:00:11.897647 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-04-10 01:00:11.897652 | orchestrator |
2025-04-10 01:00:11.897657 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-04-10 01:00:11.897662 | orchestrator | Thursday 10 April 2025 00:58:49 +0000 (0:00:02.114) 0:12:32.067 ********
2025-04-10 01:00:11.897667 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:00:11.897674 | orchestrator |
2025-04-10 01:00:11.897679 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-04-10 01:00:11.897684 | orchestrator | Thursday 10 April 2025 00:58:50 +0000 (0:00:00.838) 0:12:32.906 ********
2025-04-10 01:00:11.897689 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.897693 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.897698 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.897703 | orchestrator |
2025-04-10 01:00:11.897708 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-04-10 01:00:11.897713 | orchestrator | Thursday 10 April 2025 00:58:50 +0000 (0:00:00.353) 0:12:33.260 ********
2025-04-10 01:00:11.897718 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.897723 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.897730 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.897735 | orchestrator |
2025-04-10 01:00:11.897744 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-04-10 01:00:11.897749 | orchestrator | Thursday 10 April 2025 00:58:51 +0000 (0:00:00.722) 0:12:33.982 ********
2025-04-10 01:00:11.897754 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.897759 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.897764 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.897768 | orchestrator |
2025-04-10 01:00:11.897773 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-04-10 01:00:11.897778 | orchestrator | Thursday 10 April 2025 00:58:52 +0000 (0:00:01.051) 0:12:35.033 ********
2025-04-10 01:00:11.897783 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.897788 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.897793 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.897798 | orchestrator |
2025-04-10 01:00:11.897802 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-04-10 01:00:11.897807 | orchestrator | Thursday 10 April 2025 00:58:53 +0000 (0:00:00.703) 0:12:35.736 ********
2025-04-10 01:00:11.897815 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.897820 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.897825 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.897830 | orchestrator |
2025-04-10 01:00:11.897835 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-04-10 01:00:11.897851 | orchestrator | Thursday 10 April 2025 00:58:53 +0000 (0:00:00.335) 0:12:36.072 ********
2025-04-10 01:00:11.897859 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.897864 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.897869 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.897874 | orchestrator |
2025-04-10 01:00:11.897878 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-04-10 01:00:11.897883 | orchestrator | Thursday 10 April 2025 00:58:53 +0000 (0:00:00.305) 0:12:36.378 ********
2025-04-10 01:00:11.897888 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.897893 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.897898 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.897903 | orchestrator |
2025-04-10 01:00:11.897908 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-04-10 01:00:11.897916 | orchestrator | Thursday 10 April 2025 00:58:54 +0000 (0:00:00.605) 0:12:36.984 ********
2025-04-10 01:00:11.897921 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.897925 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.897930 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.897935 | orchestrator |
2025-04-10 01:00:11.897940 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-04-10 01:00:11.897945 | orchestrator | Thursday 10 April 2025 00:58:54 +0000 (0:00:00.320) 0:12:37.304 ********
2025-04-10 01:00:11.897950 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.897955 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.897960 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.897965 | orchestrator |
2025-04-10 01:00:11.897970 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-04-10 01:00:11.897975 | orchestrator | Thursday 10 April 2025 00:58:54 +0000 (0:00:00.303) 0:12:37.607 ********
2025-04-10 01:00:11.897979 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.897984 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.897989 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.897994 | orchestrator |
2025-04-10 01:00:11.897999 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-04-10 01:00:11.898003 | orchestrator | Thursday 10 April 2025 00:58:55 +0000 (0:00:00.353) 0:12:37.961 ********
2025-04-10 01:00:11.898008 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.898027 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:00:11.898033 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:00:11.898038 | orchestrator |
2025-04-10 01:00:11.898043 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-04-10 01:00:11.898048 | orchestrator | Thursday 10 April 2025 00:58:56 +0000 (0:00:01.117) 0:12:39.079 ********
2025-04-10 01:00:11.898053 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.898058 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.898063 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.898067 | orchestrator |
2025-04-10 01:00:11.898072 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-04-10 01:00:11.898077 | orchestrator | Thursday 10 April 2025 00:58:56 +0000 (0:00:00.370) 0:12:39.450 ********
2025-04-10 01:00:11.898082 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:00:11.898087 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:00:11.898092 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:00:11.898097 | orchestrator |
2025-04-10 01:00:11.898102 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-04-10 01:00:11.898107 | orchestrator | Thursday 10 April 2025 00:58:57 +0000 (0:00:00.341) 0:12:39.791 ********
2025-04-10 01:00:11.898111 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:00:11.898116 | orchestrator | ok:
[testbed-node-4] 2025-04-10 01:00:11.898121 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.898126 | orchestrator | 2025-04-10 01:00:11.898131 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-10 01:00:11.898136 | orchestrator | Thursday 10 April 2025 00:58:57 +0000 (0:00:00.354) 0:12:40.146 ******** 2025-04-10 01:00:11.898140 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.898145 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.898150 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.898155 | orchestrator | 2025-04-10 01:00:11.898160 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-10 01:00:11.898164 | orchestrator | Thursday 10 April 2025 00:58:58 +0000 (0:00:00.681) 0:12:40.827 ******** 2025-04-10 01:00:11.898169 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.898174 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.898179 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.898184 | orchestrator | 2025-04-10 01:00:11.898188 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-10 01:00:11.898193 | orchestrator | Thursday 10 April 2025 00:58:58 +0000 (0:00:00.374) 0:12:41.202 ******** 2025-04-10 01:00:11.898203 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898208 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898212 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898217 | orchestrator | 2025-04-10 01:00:11.898222 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-10 01:00:11.898229 | orchestrator | Thursday 10 April 2025 00:58:58 +0000 (0:00:00.346) 0:12:41.548 ******** 2025-04-10 01:00:11.898235 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898240 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898245 | 
orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898249 | orchestrator | 2025-04-10 01:00:11.898257 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-10 01:00:11.898261 | orchestrator | Thursday 10 April 2025 00:58:59 +0000 (0:00:00.320) 0:12:41.869 ******** 2025-04-10 01:00:11.898266 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898271 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898279 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898284 | orchestrator | 2025-04-10 01:00:11.898289 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-10 01:00:11.898294 | orchestrator | Thursday 10 April 2025 00:58:59 +0000 (0:00:00.689) 0:12:42.558 ******** 2025-04-10 01:00:11.898298 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.898303 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.898308 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.898313 | orchestrator | 2025-04-10 01:00:11.898318 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-10 01:00:11.898323 | orchestrator | Thursday 10 April 2025 00:59:00 +0000 (0:00:00.361) 0:12:42.919 ******** 2025-04-10 01:00:11.898328 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898333 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898338 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898343 | orchestrator | 2025-04-10 01:00:11.898347 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-10 01:00:11.898352 | orchestrator | Thursday 10 April 2025 00:59:00 +0000 (0:00:00.365) 0:12:43.285 ******** 2025-04-10 01:00:11.898357 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898362 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898367 | orchestrator | 
skipping: [testbed-node-5] 2025-04-10 01:00:11.898372 | orchestrator | 2025-04-10 01:00:11.898377 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-10 01:00:11.898381 | orchestrator | Thursday 10 April 2025 00:59:00 +0000 (0:00:00.321) 0:12:43.607 ******** 2025-04-10 01:00:11.898386 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898391 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898396 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898401 | orchestrator | 2025-04-10 01:00:11.898406 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-10 01:00:11.898410 | orchestrator | Thursday 10 April 2025 00:59:01 +0000 (0:00:00.688) 0:12:44.295 ******** 2025-04-10 01:00:11.898415 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898420 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898425 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898430 | orchestrator | 2025-04-10 01:00:11.898435 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-10 01:00:11.898440 | orchestrator | Thursday 10 April 2025 00:59:01 +0000 (0:00:00.382) 0:12:44.678 ******** 2025-04-10 01:00:11.898445 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898450 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898454 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898459 | orchestrator | 2025-04-10 01:00:11.898464 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-10 01:00:11.898469 | orchestrator | Thursday 10 April 2025 00:59:02 +0000 (0:00:00.367) 0:12:45.047 ******** 2025-04-10 01:00:11.898477 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898482 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898487 | orchestrator | 
skipping: [testbed-node-5] 2025-04-10 01:00:11.898492 | orchestrator | 2025-04-10 01:00:11.898497 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-10 01:00:11.898502 | orchestrator | Thursday 10 April 2025 00:59:02 +0000 (0:00:00.336) 0:12:45.385 ******** 2025-04-10 01:00:11.898507 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898512 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898516 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898521 | orchestrator | 2025-04-10 01:00:11.898526 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-10 01:00:11.898532 | orchestrator | Thursday 10 April 2025 00:59:03 +0000 (0:00:00.820) 0:12:46.206 ******** 2025-04-10 01:00:11.898536 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898541 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898546 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898551 | orchestrator | 2025-04-10 01:00:11.898556 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-10 01:00:11.898561 | orchestrator | Thursday 10 April 2025 00:59:03 +0000 (0:00:00.395) 0:12:46.602 ******** 2025-04-10 01:00:11.898566 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898571 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898576 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898581 | orchestrator | 2025-04-10 01:00:11.898586 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-10 01:00:11.898590 | orchestrator | Thursday 10 April 2025 00:59:04 +0000 (0:00:00.352) 0:12:46.955 ******** 2025-04-10 01:00:11.898595 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898600 | orchestrator 
| skipping: [testbed-node-4] 2025-04-10 01:00:11.898605 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898610 | orchestrator | 2025-04-10 01:00:11.898615 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-10 01:00:11.898620 | orchestrator | Thursday 10 April 2025 00:59:04 +0000 (0:00:00.388) 0:12:47.343 ******** 2025-04-10 01:00:11.898625 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898630 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898634 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898639 | orchestrator | 2025-04-10 01:00:11.898644 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-10 01:00:11.898649 | orchestrator | Thursday 10 April 2025 00:59:05 +0000 (0:00:00.696) 0:12:48.040 ******** 2025-04-10 01:00:11.898654 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898659 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898666 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898671 | orchestrator | 2025-04-10 01:00:11.898676 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-10 01:00:11.898681 | orchestrator | Thursday 10 April 2025 00:59:05 +0000 (0:00:00.361) 0:12:48.401 ******** 2025-04-10 01:00:11.898686 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-10 01:00:11.898691 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-10 01:00:11.898695 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898700 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-10 01:00:11.898705 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-10 01:00:11.898710 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898715 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-10 01:00:11.898720 | 
orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-10 01:00:11.898725 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898730 | orchestrator | 2025-04-10 01:00:11.898734 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-10 01:00:11.898742 | orchestrator | Thursday 10 April 2025 00:59:06 +0000 (0:00:00.400) 0:12:48.802 ******** 2025-04-10 01:00:11.898747 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-10 01:00:11.898755 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-10 01:00:11.898760 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898765 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-10 01:00:11.898770 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-10 01:00:11.898774 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898779 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-10 01:00:11.898784 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-10 01:00:11.898789 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898794 | orchestrator | 2025-04-10 01:00:11.898798 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-10 01:00:11.898803 | orchestrator | Thursday 10 April 2025 00:59:06 +0000 (0:00:00.429) 0:12:49.231 ******** 2025-04-10 01:00:11.898808 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898813 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898818 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898823 | orchestrator | 2025-04-10 01:00:11.898828 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-10 01:00:11.898832 | orchestrator | Thursday 10 April 2025 00:59:07 +0000 (0:00:00.681) 0:12:49.913 
******** 2025-04-10 01:00:11.898837 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898853 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898861 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898865 | orchestrator | 2025-04-10 01:00:11.898870 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-10 01:00:11.898875 | orchestrator | Thursday 10 April 2025 00:59:07 +0000 (0:00:00.380) 0:12:50.293 ******** 2025-04-10 01:00:11.898880 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898885 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898890 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898895 | orchestrator | 2025-04-10 01:00:11.898900 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-10 01:00:11.898904 | orchestrator | Thursday 10 April 2025 00:59:07 +0000 (0:00:00.377) 0:12:50.671 ******** 2025-04-10 01:00:11.898909 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898914 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898919 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898923 | orchestrator | 2025-04-10 01:00:11.898928 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-10 01:00:11.898933 | orchestrator | Thursday 10 April 2025 00:59:08 +0000 (0:00:00.339) 0:12:51.011 ******** 2025-04-10 01:00:11.898938 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898943 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898947 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898952 | orchestrator | 2025-04-10 01:00:11.898957 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-10 01:00:11.898962 | orchestrator | Thursday 10 
April 2025 00:59:08 +0000 (0:00:00.653) 0:12:51.664 ******** 2025-04-10 01:00:11.898967 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.898972 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.898976 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.898981 | orchestrator | 2025-04-10 01:00:11.898986 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-10 01:00:11.898991 | orchestrator | Thursday 10 April 2025 00:59:09 +0000 (0:00:00.363) 0:12:52.028 ******** 2025-04-10 01:00:11.898996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:00:11.899001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:00:11.899008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:00:11.899013 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899018 | orchestrator | 2025-04-10 01:00:11.899023 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-10 01:00:11.899028 | orchestrator | Thursday 10 April 2025 00:59:09 +0000 (0:00:00.449) 0:12:52.478 ******** 2025-04-10 01:00:11.899033 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:00:11.899037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:00:11.899042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:00:11.899047 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899052 | orchestrator | 2025-04-10 01:00:11.899057 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-10 01:00:11.899062 | orchestrator | Thursday 10 April 2025 00:59:10 +0000 (0:00:00.448) 0:12:52.926 ******** 2025-04-10 01:00:11.899067 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:00:11.899072 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:00:11.899079 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:00:11.899084 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899089 | orchestrator | 2025-04-10 01:00:11.899093 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-10 01:00:11.899098 | orchestrator | Thursday 10 April 2025 00:59:10 +0000 (0:00:00.444) 0:12:53.371 ******** 2025-04-10 01:00:11.899103 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899108 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899113 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899118 | orchestrator | 2025-04-10 01:00:11.899122 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-10 01:00:11.899127 | orchestrator | Thursday 10 April 2025 00:59:11 +0000 (0:00:00.336) 0:12:53.707 ******** 2025-04-10 01:00:11.899132 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-10 01:00:11.899137 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899142 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-10 01:00:11.899146 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899151 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-10 01:00:11.899156 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899161 | orchestrator | 2025-04-10 01:00:11.899165 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-10 01:00:11.899170 | orchestrator | Thursday 10 April 2025 00:59:11 +0000 (0:00:00.846) 0:12:54.554 ******** 2025-04-10 01:00:11.899175 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899180 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899185 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899190 | 
orchestrator | 2025-04-10 01:00:11.899194 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-10 01:00:11.899199 | orchestrator | Thursday 10 April 2025 00:59:12 +0000 (0:00:00.360) 0:12:54.915 ******** 2025-04-10 01:00:11.899204 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899209 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899214 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899219 | orchestrator | 2025-04-10 01:00:11.899223 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-10 01:00:11.899228 | orchestrator | Thursday 10 April 2025 00:59:12 +0000 (0:00:00.330) 0:12:55.245 ******** 2025-04-10 01:00:11.899233 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-10 01:00:11.899238 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899243 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-10 01:00:11.899247 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899252 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-10 01:00:11.899257 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899265 | orchestrator | 2025-04-10 01:00:11.899270 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-10 01:00:11.899274 | orchestrator | Thursday 10 April 2025 00:59:13 +0000 (0:00:00.462) 0:12:55.707 ******** 2025-04-10 01:00:11.899279 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-10 01:00:11.899287 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899292 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-10 01:00:11.899297 | orchestrator | skipping: [testbed-node-4] 
2025-04-10 01:00:11.899302 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-10 01:00:11.899307 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899312 | orchestrator | 2025-04-10 01:00:11.899316 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-10 01:00:11.899321 | orchestrator | Thursday 10 April 2025 00:59:13 +0000 (0:00:00.681) 0:12:56.389 ******** 2025-04-10 01:00:11.899326 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:00:11.899331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:00:11.899336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:00:11.899340 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899345 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-10 01:00:11.899350 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-10 01:00:11.899355 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-10 01:00:11.899360 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899365 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-10 01:00:11.899369 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-10 01:00:11.899375 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-10 01:00:11.899379 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899384 | orchestrator | 2025-04-10 01:00:11.899389 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-10 01:00:11.899394 | orchestrator | Thursday 10 April 2025 00:59:14 +0000 (0:00:00.642) 0:12:57.032 ******** 2025-04-10 01:00:11.899399 | orchestrator | skipping: [testbed-node-3] 2025-04-10 
01:00:11.899404 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899409 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899413 | orchestrator | 2025-04-10 01:00:11.899418 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-10 01:00:11.899423 | orchestrator | Thursday 10 April 2025 00:59:15 +0000 (0:00:00.841) 0:12:57.873 ******** 2025-04-10 01:00:11.899428 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-10 01:00:11.899436 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899440 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-10 01:00:11.899445 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899450 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-10 01:00:11.899455 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899460 | orchestrator | 2025-04-10 01:00:11.899466 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-10 01:00:11.899471 | orchestrator | Thursday 10 April 2025 00:59:15 +0000 (0:00:00.631) 0:12:58.505 ******** 2025-04-10 01:00:11.899476 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899481 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899486 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899491 | orchestrator | 2025-04-10 01:00:11.899495 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-10 01:00:11.899500 | orchestrator | Thursday 10 April 2025 00:59:16 +0000 (0:00:00.873) 0:12:59.379 ******** 2025-04-10 01:00:11.899508 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899513 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899517 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899522 | orchestrator | 2025-04-10 01:00:11.899527 | orchestrator | TASK [ceph-rgw : 
include common.yml] ******************************************* 2025-04-10 01:00:11.899532 | orchestrator | Thursday 10 April 2025 00:59:17 +0000 (0:00:00.603) 0:12:59.982 ******** 2025-04-10 01:00:11.899537 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.899542 | orchestrator | 2025-04-10 01:00:11.899546 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-04-10 01:00:11.899551 | orchestrator | Thursday 10 April 2025 00:59:18 +0000 (0:00:00.873) 0:13:00.856 ******** 2025-04-10 01:00:11.899556 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-04-10 01:00:11.899561 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-04-10 01:00:11.899566 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-04-10 01:00:11.899571 | orchestrator | 2025-04-10 01:00:11.899575 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-04-10 01:00:11.899580 | orchestrator | Thursday 10 April 2025 00:59:18 +0000 (0:00:00.704) 0:13:01.561 ******** 2025-04-10 01:00:11.899585 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:00:11.899590 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-10 01:00:11.899595 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-10 01:00:11.899599 | orchestrator | 2025-04-10 01:00:11.899604 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-04-10 01:00:11.899609 | orchestrator | Thursday 10 April 2025 00:59:20 +0000 (0:00:01.938) 0:13:03.499 ******** 2025-04-10 01:00:11.899614 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-10 01:00:11.899619 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-10 01:00:11.899624 | orchestrator | changed: [testbed-node-3] 
2025-04-10 01:00:11.899629 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-10 01:00:11.899633 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-10 01:00:11.899638 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.899643 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-10 01:00:11.899648 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-10 01:00:11.899653 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.899658 | orchestrator | 2025-04-10 01:00:11.899663 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-04-10 01:00:11.899668 | orchestrator | Thursday 10 April 2025 00:59:22 +0000 (0:00:01.254) 0:13:04.753 ******** 2025-04-10 01:00:11.899673 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899678 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899683 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899688 | orchestrator | 2025-04-10 01:00:11.899692 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-04-10 01:00:11.899697 | orchestrator | Thursday 10 April 2025 00:59:22 +0000 (0:00:00.666) 0:13:05.420 ******** 2025-04-10 01:00:11.899702 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899707 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899712 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899716 | orchestrator | 2025-04-10 01:00:11.899721 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-04-10 01:00:11.899726 | orchestrator | Thursday 10 April 2025 00:59:23 +0000 (0:00:00.345) 0:13:05.765 ******** 2025-04-10 01:00:11.899731 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-04-10 01:00:11.899736 | orchestrator | 2025-04-10 01:00:11.899741 | orchestrator | TASK [ceph-rgw : 
create ec profile] ******************************************** 2025-04-10 01:00:11.899745 | orchestrator | Thursday 10 April 2025 00:59:23 +0000 (0:00:00.243) 0:13:06.008 ******** 2025-04-10 01:00:11.899753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899781 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899786 | orchestrator | 2025-04-10 01:00:11.899790 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-04-10 01:00:11.899795 | orchestrator | Thursday 10 April 2025 00:59:24 +0000 (0:00:00.927) 0:13:06.935 ******** 2025-04-10 01:00:11.899800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899817 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899829 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899834 | orchestrator | 2025-04-10 01:00:11.899849 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-04-10 01:00:11.899854 | orchestrator | Thursday 10 April 2025 00:59:25 +0000 (0:00:00.956) 0:13:07.892 ******** 2025-04-10 01:00:11.899859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-10 01:00:11.899883 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899888 | orchestrator | 2025-04-10 01:00:11.899893 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-04-10 01:00:11.899898 | orchestrator | Thursday 10 April 2025 00:59:25 +0000 (0:00:00.717) 0:13:08.610 ******** 2025-04-10 01:00:11.899903 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}}) 2025-04-10 01:00:11.899908 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-10 01:00:11.899913 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-10 01:00:11.899918 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-10 01:00:11.899928 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-10 01:00:11.899932 | orchestrator | 2025-04-10 01:00:11.899937 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-04-10 01:00:11.899942 | orchestrator | Thursday 10 April 2025 00:59:52 +0000 (0:00:26.589) 0:13:35.199 ******** 2025-04-10 01:00:11.899947 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899952 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899957 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899961 | orchestrator | 2025-04-10 01:00:11.899966 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-04-10 01:00:11.899971 | orchestrator | Thursday 10 April 2025 00:59:53 +0000 (0:00:00.509) 0:13:35.709 ******** 2025-04-10 01:00:11.899976 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.899981 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.899986 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.899990 | orchestrator | 2025-04-10 01:00:11.899997 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] 
********************************* 2025-04-10 01:00:11.900002 | orchestrator | Thursday 10 April 2025 00:59:53 +0000 (0:00:00.360) 0:13:36.070 ******** 2025-04-10 01:00:11.900007 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.900012 | orchestrator | 2025-04-10 01:00:11.900017 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-04-10 01:00:11.900022 | orchestrator | Thursday 10 April 2025 00:59:53 +0000 (0:00:00.574) 0:13:36.645 ******** 2025-04-10 01:00:11.900027 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.900031 | orchestrator | 2025-04-10 01:00:11.900036 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-04-10 01:00:11.900041 | orchestrator | Thursday 10 April 2025 00:59:54 +0000 (0:00:00.827) 0:13:37.473 ******** 2025-04-10 01:00:11.900046 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.900051 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.900055 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.900060 | orchestrator | 2025-04-10 01:00:11.900065 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-04-10 01:00:11.900070 | orchestrator | Thursday 10 April 2025 00:59:56 +0000 (0:00:01.286) 0:13:38.759 ******** 2025-04-10 01:00:11.900075 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.900080 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.900086 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.900091 | orchestrator | 2025-04-10 01:00:11.900096 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-04-10 01:00:11.900101 | orchestrator | Thursday 10 April 2025 00:59:57 +0000 (0:00:01.188) 
0:13:39.947 ******** 2025-04-10 01:00:11.900106 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.900111 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.900116 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.900120 | orchestrator | 2025-04-10 01:00:11.900125 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-04-10 01:00:11.900130 | orchestrator | Thursday 10 April 2025 00:59:59 +0000 (0:00:02.007) 0:13:41.955 ******** 2025-04-10 01:00:11.900135 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-10 01:00:11.900140 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-10 01:00:11.900145 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-10 01:00:11.900153 | orchestrator | 2025-04-10 01:00:11.900158 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-04-10 01:00:11.900163 | orchestrator | Thursday 10 April 2025 01:00:01 +0000 (0:00:01.965) 0:13:43.920 ******** 2025-04-10 01:00:11.900167 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.900172 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:00:11.900177 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:00:11.900182 | orchestrator | 2025-04-10 01:00:11.900187 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-10 01:00:11.900191 | orchestrator | Thursday 10 April 2025 01:00:02 +0000 (0:00:01.526) 0:13:45.447 ******** 2025-04-10 01:00:11.900196 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.900201 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.900206 | 
orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.900211 | orchestrator | 2025-04-10 01:00:11.900215 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-04-10 01:00:11.900220 | orchestrator | Thursday 10 April 2025 01:00:03 +0000 (0:00:00.705) 0:13:46.152 ******** 2025-04-10 01:00:11.900225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:00:11.900230 | orchestrator | 2025-04-10 01:00:11.900235 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-04-10 01:00:11.900240 | orchestrator | Thursday 10 April 2025 01:00:04 +0000 (0:00:00.868) 0:13:47.021 ******** 2025-04-10 01:00:11.900245 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.900250 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.900254 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.900259 | orchestrator | 2025-04-10 01:00:11.900264 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-04-10 01:00:11.900269 | orchestrator | Thursday 10 April 2025 01:00:04 +0000 (0:00:00.358) 0:13:47.379 ******** 2025-04-10 01:00:11.900274 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.900279 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.900283 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.900288 | orchestrator | 2025-04-10 01:00:11.900293 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-04-10 01:00:11.900298 | orchestrator | Thursday 10 April 2025 01:00:05 +0000 (0:00:01.274) 0:13:48.653 ******** 2025-04-10 01:00:11.900303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:00:11.900308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:00:11.900312 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:00:11.900317 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:00:11.900322 | orchestrator | 2025-04-10 01:00:11.900327 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-04-10 01:00:11.900331 | orchestrator | Thursday 10 April 2025 01:00:06 +0000 (0:00:01.007) 0:13:49.661 ******** 2025-04-10 01:00:11.900336 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:00:11.900341 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:00:11.900346 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:00:11.900351 | orchestrator | 2025-04-10 01:00:11.900356 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-10 01:00:11.900361 | orchestrator | Thursday 10 April 2025 01:00:07 +0000 (0:00:00.356) 0:13:50.018 ******** 2025-04-10 01:00:11.900365 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:00:11.900370 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:00:11.900375 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:00:11.900380 | orchestrator | 2025-04-10 01:00:11.900385 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:00:11.900390 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-04-10 01:00:11.900395 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-04-10 01:00:11.900403 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-04-10 01:00:11.900408 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-04-10 01:00:11.900413 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-04-10 01:00:11.900419 | orchestrator | 
testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-04-10 01:00:14.901154 | orchestrator | 2025-04-10 01:00:14.901289 | orchestrator | 2025-04-10 01:00:14.901308 | orchestrator | 2025-04-10 01:00:14.901322 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:00:14.901337 | orchestrator | Thursday 10 April 2025 01:00:08 +0000 (0:00:01.355) 0:13:51.373 ******** 2025-04-10 01:00:14.901350 | orchestrator | =============================================================================== 2025-04-10 01:00:14.901382 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 38.64s 2025-04-10 01:00:14.901401 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 30.48s 2025-04-10 01:00:14.901421 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 26.59s 2025-04-10 01:00:14.901434 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 21.53s 2025-04-10 01:00:14.901447 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.10s 2025-04-10 01:00:14.901459 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.89s 2025-04-10 01:00:14.901472 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.64s 2025-04-10 01:00:14.901485 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 8.36s 2025-04-10 01:00:14.901497 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.19s 2025-04-10 01:00:14.901510 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.33s 2025-04-10 01:00:14.901522 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.72s 2025-04-10 01:00:14.901534 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.60s 2025-04-10 01:00:14.901547 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.15s 2025-04-10 01:00:14.901559 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.79s 2025-04-10 01:00:14.901572 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 5.26s 2025-04-10 01:00:14.901584 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 5.04s 2025-04-10 01:00:14.901597 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 5.03s 2025-04-10 01:00:14.901609 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.77s 2025-04-10 01:00:14.901622 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 3.58s 2025-04-10 01:00:14.901634 | orchestrator | ceph-container-common : get ceph version -------------------------------- 
3.47s 2025-04-10 01:00:14.901647 | orchestrator | 2025-04-10 01:00:11 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:00:14.901678 | orchestrator | 2025-04-10 01:00:14 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:00:14.902979 | orchestrator | 2025-04-10 01:00:14 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 01:00:14.903738 | orchestrator | 2025-04-10 01:00:14 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state STARTED 2025-04-10 01:00:17.959680 | orchestrator | 2025-04-10 01:00:14 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:00:17.959817 | orchestrator | 2025-04-10 01:00:17 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:00:17.962767 | orchestrator | 2025-04-10 01:00:17 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 01:00:17.965306 | orchestrator | 2025-04-10 01:00:17 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state STARTED 2025-04-10 01:00:21.027913 | orchestrator | 2025-04-10 01:00:17 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:00:21.028063 | orchestrator | 2025-04-10 01:00:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:00:21.029562 | orchestrator | 2025-04-10 01:00:21 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 01:00:21.029599 | orchestrator | 2025-04-10 01:00:21 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state STARTED 2025-04-10 01:00:21.029802 | orchestrator | 2025-04-10 01:00:21 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:00:24.075420 | orchestrator | 2025-04-10 01:00:24 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:00:24.079217 | orchestrator | 2025-04-10 01:00:24 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 01:00:24.082270 | orchestrator | 
2025-04-10 01:00:24 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state STARTED 2025-04-10 01:00:24.083293 | orchestrator | 2025-04-10 01:00:24 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:00:27.124639 | orchestrator | 2025-04-10 01:00:27 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:00:27.125297 | orchestrator | 2025-04-10 01:00:27 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 01:00:27.128379 | orchestrator | 2025-04-10 01:00:27 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state STARTED 2025-04-10 01:00:30.179947 | orchestrator | 2025-04-10 01:00:27 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:00:30.180080 | orchestrator | 2025-04-10 01:00:30 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:00:30.183154 | orchestrator | 2025-04-10 01:00:30 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 01:00:30.184460 | orchestrator | 2025-04-10 01:00:30 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state STARTED 2025-04-10 01:00:30.185478 | orchestrator | 2025-04-10 01:00:30 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:00:33.231303 | orchestrator | 2025-04-10 01:00:33 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:00:36.288823 | orchestrator | 2025-04-10 01:00:33 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 01:00:36.289013 | orchestrator | 2025-04-10 01:00:33 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state STARTED 2025-04-10 01:00:36.289031 | orchestrator | 2025-04-10 01:00:33 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:00:36.289060 | orchestrator | 2025-04-10 01:00:36 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:00:36.289937 | orchestrator | 2025-04-10 01:00:36 | INFO  | Task 
53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state STARTED 2025-04-10 01:00:36.292127 | orchestrator | 2025-04-10 01:00:36 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state STARTED 2025-04-10 01:00:39.335201 | orchestrator | 2025-04-10 01:00:36 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:00:39.335490 | orchestrator | 2025-04-10 01:00:39 | INFO  | Task c26d11bf-a965-42cc-b013-e0eae6aa2802 is in state STARTED 2025-04-10 01:00:39.335528 | orchestrator | 2025-04-10 01:00:39 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:00:39.337538 | orchestrator | 2025-04-10 01:00:39 | INFO  | Task 53e07cbe-62d9-4334-aea0-4c2bd7e36d87 is in state SUCCESS 2025-04-10 01:00:39.339639 | orchestrator | 2025-04-10 01:00:39.339682 | orchestrator | 2025-04-10 01:00:39.339740 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-04-10 01:00:39.339755 | orchestrator | 2025-04-10 01:00:39.339770 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-04-10 01:00:39.339784 | orchestrator | Thursday 10 April 2025 00:57:01 +0000 (0:00:00.190) 0:00:00.190 ******** 2025-04-10 01:00:39.339798 | orchestrator | ok: [localhost] => { 2025-04-10 01:00:39.339814 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-04-10 01:00:39.339829 | orchestrator | } 2025-04-10 01:00:39.339893 | orchestrator | 2025-04-10 01:00:39.339910 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-04-10 01:00:39.339925 | orchestrator | Thursday 10 April 2025 00:57:01 +0000 (0:00:00.048) 0:00:00.238 ******** 2025-04-10 01:00:39.339939 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-04-10 01:00:39.339954 | orchestrator | ...ignoring 2025-04-10 01:00:39.339969 | orchestrator | 2025-04-10 01:00:39.339983 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-04-10 01:00:39.339997 | orchestrator | Thursday 10 April 2025 00:57:04 +0000 (0:00:02.548) 0:00:02.786 ******** 2025-04-10 01:00:39.340011 | orchestrator | skipping: [localhost] 2025-04-10 01:00:39.340026 | orchestrator | 2025-04-10 01:00:39.340040 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-04-10 01:00:39.340054 | orchestrator | Thursday 10 April 2025 00:57:04 +0000 (0:00:00.067) 0:00:02.853 ******** 2025-04-10 01:00:39.340068 | orchestrator | ok: [localhost] 2025-04-10 01:00:39.340082 | orchestrator | 2025-04-10 01:00:39.340096 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 01:00:39.340110 | orchestrator | 2025-04-10 01:00:39.340124 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 01:00:39.340138 | orchestrator | Thursday 10 April 2025 00:57:04 +0000 (0:00:00.153) 0:00:03.007 ******** 2025-04-10 01:00:39.340152 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:39.340166 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:39.340180 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:39.340194 | orchestrator | 2025-04-10 01:00:39.340219 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 01:00:39.340234 | orchestrator | Thursday 10 April 2025 00:57:04 +0000 (0:00:00.440) 0:00:03.447 ******** 2025-04-10 01:00:39.340248 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-04-10 01:00:39.340271 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-04-10 01:00:39.340288 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-04-10 01:00:39.340304 | orchestrator | 2025-04-10 01:00:39.340321 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-04-10 01:00:39.340337 | orchestrator | 2025-04-10 01:00:39.340352 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-04-10 01:00:39.340368 | orchestrator | Thursday 10 April 2025 00:57:05 +0000 (0:00:00.434) 0:00:03.882 ******** 2025-04-10 01:00:39.340384 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-10 01:00:39.340400 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-10 01:00:39.340415 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-10 01:00:39.340431 | orchestrator | 2025-04-10 01:00:39.340447 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-10 01:00:39.340478 | orchestrator | Thursday 10 April 2025 00:57:05 +0000 (0:00:00.696) 0:00:04.578 ******** 2025-04-10 01:00:39.340495 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:00:39.340517 | orchestrator | 2025-04-10 01:00:39.340537 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-04-10 01:00:39.340552 | orchestrator | Thursday 10 April 2025 00:57:06 +0000 (0:00:00.662) 0:00:05.241 ******** 2025-04-10 01:00:39.340586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-10 01:00:39.340609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-10 01:00:39.340633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-10 01:00:39.340658 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-10 01:00:39.340675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-10 01:00:39.340691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-10 01:00:39.340705 | orchestrator | 2025-04-10 01:00:39.340720 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-04-10 01:00:39.340741 | orchestrator | Thursday 10 April 2025 00:57:10 +0000 (0:00:04.416) 0:00:09.658 ******** 2025-04-10 01:00:39.340755 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:39.340770 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:39.340790 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:39.340804 | orchestrator | 2025-04-10 01:00:39.340819 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-04-10 01:00:39.340833 | orchestrator | Thursday 10 April 2025 00:57:11 +0000 (0:00:00.840) 0:00:10.498 ******** 2025-04-10 01:00:39.340875 | orchestrator | 
skipping: [testbed-node-1] 2025-04-10 01:00:39.340890 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:39.340904 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:39.340918 | orchestrator | 2025-04-10 01:00:39.340932 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-04-10 01:00:39.340946 | orchestrator | Thursday 10 April 2025 00:57:13 +0000 (0:00:01.654) 0:00:12.152 ******** 2025-04-10 01:00:39.340974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-10 01:00:39.340992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-10 01:00:39.341015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-10 01:00:39.341038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-10 01:00:39.341054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-10 01:00:39.341069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-10 01:00:39.341090 | orchestrator | 2025-04-10 01:00:39.341105 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-04-10 01:00:39.341119 | orchestrator | Thursday 10 April 2025 00:57:20 +0000 (0:00:06.972) 0:00:19.125 ******** 2025-04-10 01:00:39.341137 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:39.341157 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:39.341171 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:39.341185 | orchestrator | 2025-04-10 01:00:39.341199 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-04-10 01:00:39.341213 | orchestrator | Thursday 10 April 2025 00:57:21 +0000 (0:00:01.276) 0:00:20.402 ******** 2025-04-10 01:00:39.341227 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:39.341241 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:39.341255 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:39.341269 | orchestrator | 2025-04-10 01:00:39.341283 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-04-10 01:00:39.341296 | orchestrator | Thursday 10 April 2025 00:57:32 +0000 (0:00:10.856) 0:00:31.258 ******** 2025-04-10 01:00:39.341319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-10 01:00:39.341336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-10 01:00:39.341360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-10 01:00:39.341383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-10 01:00:39.341399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-10 01:00:39.341420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-10 01:00:39.341435 | orchestrator | 2025-04-10 01:00:39.341449 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-04-10 01:00:39.341463 | orchestrator | Thursday 10 April 2025 00:57:37 +0000 (0:00:04.985) 0:00:36.244 ******** 2025-04-10 01:00:39.341477 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:39.341491 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:00:39.341505 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:00:39.341519 | orchestrator | 2025-04-10 01:00:39.341533 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-04-10 01:00:39.341547 | orchestrator | Thursday 10 April 2025 00:57:38 +0000 (0:00:01.134) 0:00:37.379 ******** 2025-04-10 01:00:39.341561 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:39.341575 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:39.341589 | orchestrator | ok: [testbed-node-2] 2025-04-10 
01:00:39.341603 | orchestrator | 
2025-04-10 01:00:39.341617 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-04-10 01:00:39.341631 | orchestrator | Thursday 10 April 2025 00:57:39 +0000 (0:00:00.519) 0:00:37.899 ********
2025-04-10 01:00:39.341645 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:39.341659 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:39.341673 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:39.341687 | orchestrator | 
2025-04-10 01:00:39.341701 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-04-10 01:00:39.341715 | orchestrator | Thursday 10 April 2025 00:57:39 +0000 (0:00:00.438) 0:00:38.337 ********
2025-04-10 01:00:39.341730 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-04-10 01:00:39.341744 | orchestrator | ...ignoring
2025-04-10 01:00:39.341759 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-04-10 01:00:39.341773 | orchestrator | ...ignoring
2025-04-10 01:00:39.341787 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-04-10 01:00:39.341801 | orchestrator | ...ignoring
2025-04-10 01:00:39.341815 | orchestrator | 
2025-04-10 01:00:39.341830 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-04-10 01:00:39.341882 | orchestrator | Thursday 10 April 2025 00:57:50 +0000 (0:00:10.894) 0:00:49.232 ********
2025-04-10 01:00:39.341910 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:39.341934 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:39.341949 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:39.341963 | orchestrator | 
2025-04-10 01:00:39.341982 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-04-10 01:00:39.341997 | orchestrator | Thursday 10 April 2025 00:57:51 +0000 (0:00:00.682) 0:00:49.914 ********
2025-04-10 01:00:39.342011 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:39.342086 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:39.342116 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:39.342130 | orchestrator | 
2025-04-10 01:00:39.342145 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-04-10 01:00:39.342158 | orchestrator | Thursday 10 April 2025 00:57:52 +0000 (0:00:00.807) 0:00:50.721 ********
2025-04-10 01:00:39.342173 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:39.342187 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:39.342200 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:39.342214 | orchestrator | 
2025-04-10 01:00:39.342237 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-04-10 01:00:39.342252 | orchestrator | Thursday 10 April 2025 00:57:52 +0000 (0:00:00.518) 0:00:51.240 ********
2025-04-10 01:00:39.342266 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:39.342280 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:39.342294 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:39.342308 | orchestrator | 
2025-04-10 01:00:39.342322 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-04-10 01:00:39.342337 | orchestrator | Thursday 10 April 2025 00:57:53 +0000 (0:00:00.605) 0:00:51.846 ********
2025-04-10 01:00:39.342351 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:39.342365 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:39.342379 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:39.342393 | orchestrator | 
2025-04-10 01:00:39.342407 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-04-10 01:00:39.342421 | orchestrator | Thursday 10 April 2025 00:57:53 +0000 (0:00:00.581) 0:00:52.427 ********
2025-04-10 01:00:39.342435 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:39.342449 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:39.342463 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:39.342477 | orchestrator | 
2025-04-10 01:00:39.342491 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-04-10 01:00:39.342505 | orchestrator | Thursday 10 April 2025 00:57:54 +0000 (0:00:00.526) 0:00:53.012 ********
2025-04-10 01:00:39.342519 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:39.342533 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:39.342547 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-04-10 01:00:39.342561 | orchestrator | 
2025-04-10 01:00:39.342575 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-04-10 01:00:39.342589 | orchestrator | Thursday 10 April 2025 00:57:54 +0000 (0:00:00.526) 0:00:53.539 ********
2025-04-10 01:00:39.342603 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:39.342617 | orchestrator | 
2025-04-10 01:00:39.342631 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-04-10 01:00:39.342645 | orchestrator | Thursday 10 April 2025 00:58:06 +0000 (0:00:11.304) 0:01:04.844 ********
2025-04-10 01:00:39.342658 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:39.342672 | orchestrator | 
2025-04-10 01:00:39.342686 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-04-10 01:00:39.342700 | orchestrator | Thursday 10 April 2025 00:58:06 +0000 (0:00:00.169) 0:01:05.013 ********
2025-04-10 01:00:39.342714 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:39.342728 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:39.342742 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:39.342756 | orchestrator | 
2025-04-10 01:00:39.342770 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-04-10 01:00:39.342784 | orchestrator | Thursday 10 April 2025 00:58:07 +0000 (0:00:01.122) 0:01:06.136 ********
2025-04-10 01:00:39.342798 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:39.342812 | orchestrator | 
2025-04-10 01:00:39.342826 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-04-10 01:00:39.342889 | orchestrator | Thursday 10 April 2025 00:58:17 +0000 (0:00:09.833) 0:01:15.969 ********
2025-04-10 01:00:39.342906 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:39.342928 | orchestrator | 
2025-04-10 01:00:39.342942 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-04-10 01:00:39.342956 | orchestrator | Thursday 10 April 2025 00:58:18 +0000 (0:00:01.600) 0:01:17.569 ********
2025-04-10 01:00:39.342970 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:39.342984 | orchestrator | 
2025-04-10 01:00:39.342998 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-04-10 01:00:39.343012 | orchestrator | Thursday 10 April 2025 00:58:21 +0000 (0:00:03.035) 0:01:20.605 ********
2025-04-10 01:00:39.343025 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:39.343039 | orchestrator | 
2025-04-10 01:00:39.343053 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-04-10 01:00:39.343067 | orchestrator | Thursday 10 April 2025 00:58:22 +0000 (0:00:00.122) 0:01:20.727 ********
2025-04-10 01:00:39.343081 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:39.343095 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:39.343109 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:39.343129 | orchestrator | 
2025-04-10 01:00:39.343144 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-04-10 01:00:39.343158 | orchestrator | Thursday 10 April 2025 00:58:22 +0000 (0:00:00.489) 0:01:21.216 ********
2025-04-10 01:00:39.343172 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:00:39.343186 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:39.343200 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:39.343214 | orchestrator | 
2025-04-10 01:00:39.343228 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] *************
2025-04-10 01:00:39.343247 | orchestrator | Thursday 10 April 2025 00:58:23 +0000 (0:00:00.482) 0:01:21.699 ********
2025-04-10 01:00:39.343261 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-04-10 01:00:39.343275 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:39.343289 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:39.343303 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:39.343318 | orchestrator | 
2025-04-10 01:00:39.343332 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-04-10 01:00:39.343346 | orchestrator | skipping: no hosts matched
2025-04-10 01:00:39.343360 | orchestrator | 
2025-04-10 01:00:39.343373 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-04-10 01:00:39.343387 | orchestrator | 
2025-04-10 01:00:39.343401 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-04-10 01:00:39.343415 | orchestrator | Thursday 10 April 2025 00:58:38 +0000 (0:00:15.223) 0:01:36.922 ********
2025-04-10 01:00:39.343429 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:00:39.343443 | orchestrator | 
2025-04-10 01:00:39.343457 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-04-10 01:00:39.343470 | orchestrator | Thursday 10 April 2025 00:58:55 +0000 (0:00:17.328) 0:01:54.251 ********
2025-04-10 01:00:39.343490 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:39.343505 | orchestrator | 
2025-04-10 01:00:39.343519 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-04-10 01:00:39.343533 | orchestrator | Thursday 10 April 2025 00:59:16 +0000 (0:00:20.588) 0:02:14.839 ********
2025-04-10 01:00:39.343547 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:00:39.343561 | orchestrator | 
2025-04-10 01:00:39.343575 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-04-10 01:00:39.343589 | orchestrator | 
2025-04-10 01:00:39.343603 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-04-10 01:00:39.343617 | orchestrator | Thursday 10 April 2025 00:59:19 +0000 (0:00:02.952) 0:02:17.791 ********
2025-04-10 01:00:39.343631 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:00:39.343645 | orchestrator | 
2025-04-10 01:00:39.343659 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-04-10 01:00:39.343673 | orchestrator | Thursday 10 April 2025 00:59:40 +0000 (0:00:21.461) 0:02:39.253 ********
2025-04-10 01:00:39.343687 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:39.343707 | orchestrator | 
2025-04-10 01:00:39.343721 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-04-10 01:00:39.343735 | orchestrator | Thursday 10 April 2025 00:59:56 +0000 (0:00:15.547) 0:02:54.800 ********
2025-04-10 01:00:39.343749 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:00:39.343763 | orchestrator | 
2025-04-10 01:00:39.343777 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-04-10 01:00:39.343791 | orchestrator | 
2025-04-10 01:00:39.343805 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-04-10 01:00:39.343819 | orchestrator | Thursday 10 April 2025 00:59:58 +0000 (0:00:02.779) 0:02:57.580 ********
2025-04-10 01:00:39.343833 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:39.343863 | orchestrator | 
2025-04-10 01:00:39.343877 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-04-10 01:00:39.343892 | orchestrator | Thursday 10 April 2025 01:00:13 +0000 (0:00:14.523) 0:03:12.103 ********
2025-04-10 01:00:39.343906 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:39.343920 | orchestrator | 
2025-04-10 01:00:39.343933 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-04-10 01:00:39.343947 | orchestrator | Thursday 10 April 2025 01:00:18 +0000 (0:00:04.616) 0:03:16.720 ********
2025-04-10 01:00:39.343961 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:00:39.343975 | orchestrator | 
2025-04-10 01:00:39.343990 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-04-10 01:00:39.344003 | orchestrator | 
2025-04-10 01:00:39.344017 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-04-10 01:00:39.344031 | orchestrator | Thursday 10 April 2025 01:00:20 +0000 (0:00:02.855) 0:03:19.575 ********
2025-04-10 01:00:39.344045 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 01:00:39.344059 | orchestrator | 
2025-04-10 01:00:39.344073 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-04-10 01:00:39.344087 | orchestrator | Thursday 10 April 2025 01:00:21 +0000 (0:00:00.835) 0:03:20.410 ********
2025-04-10 01:00:39.344100 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:39.344114 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:39.344128 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:39.344142 | orchestrator | 
2025-04-10 01:00:39.344156 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-04-10 01:00:39.344170 | orchestrator | Thursday 10 April 2025 01:00:24 +0000 (0:00:02.714) 0:03:23.125 ********
2025-04-10 01:00:39.344184 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:39.344198 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:39.344211 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:39.344225 | orchestrator | 
2025-04-10 01:00:39.344245 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2025-04-10 01:00:39.344259 | orchestrator | Thursday 10 April 2025 01:00:26 +0000 (0:00:02.290) 0:03:25.415 ********
2025-04-10 01:00:39.344273 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:00:39.344287 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:00:39.344301 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:00:39.344315 | orchestrator | 
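The gating logic behind the liveness and sync waits above is worth spelling out: the container healthcheck runs `/usr/bin/clustercheck`, and the HAProxy backends use `option httpchk` against port 4569, so a node only receives traffic once Galera reports it as usable. A minimal sketch of that decision (an illustrative reimplementation, not the shipped clustercheck script) under the usual Galera state codes, with `AVAILABLE_WHEN_DONOR=1` as set in the container environment above:

```python
# Galera wsrep_local_state codes (assumed standard values).
WSREP_DONOR = 2   # Donor/Desynced: feeding an SST to a joiner
WSREP_SYNCED = 4  # Synced: fully caught up with the cluster


def clustercheck(wsrep_local_state: int, available_when_donor: bool = True) -> int:
    """Return the HTTP status an httpchk probe would see for this node.

    Mirrors (approximately) what /usr/bin/clustercheck reports; this is a
    sketch, not the real script.
    """
    if wsrep_local_state == WSREP_SYNCED:
        return 200
    if wsrep_local_state == WSREP_DONOR and available_when_donor:
        return 200  # donor keeps serving while AVAILABLE_WHEN_DONOR=1
    return 503      # joining/disconnected nodes are pulled from the pool


if __name__ == "__main__":
    print(clustercheck(4))         # synced node
    print(clustercheck(2, True))   # donor, AVAILABLE_WHEN_DONOR=1
    print(clustercheck(1))         # joining node
```

This is why the initial "Check MariaDB service port liveness" failures above are ignored: before bootstrap, no node can pass the check, and the role uses that result only to divide hosts into bootstrap and join groups.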
2025-04-10 01:00:39.344329 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-04-10 01:00:39.344343 | orchestrator | Thursday 10 April 2025 01:00:29 +0000 (0:00:02.498) 0:03:27.913 ******** 2025-04-10 01:00:39.344357 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:39.344371 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:39.344385 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:00:39.344399 | orchestrator | 2025-04-10 01:00:39.344413 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-04-10 01:00:39.344427 | orchestrator | Thursday 10 April 2025 01:00:31 +0000 (0:00:02.395) 0:03:30.309 ******** 2025-04-10 01:00:39.344441 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:00:39.344455 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:00:39.344475 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:00:39.344489 | orchestrator | 2025-04-10 01:00:39.344504 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-04-10 01:00:39.344517 | orchestrator | Thursday 10 April 2025 01:00:35 +0000 (0:00:03.933) 0:03:34.243 ******** 2025-04-10 01:00:39.344532 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:00:39.344545 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:00:39.344559 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:00:39.344573 | orchestrator | 2025-04-10 01:00:39.344587 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:00:39.344601 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-04-10 01:00:39.344616 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-04-10 01:00:39.344637 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  
2025-04-10 01:00:42.386713 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-04-10 01:00:42.386893 | orchestrator | 2025-04-10 01:00:42.386917 | orchestrator | 2025-04-10 01:00:42.387046 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:00:42.387072 | orchestrator | Thursday 10 April 2025 01:00:35 +0000 (0:00:00.387) 0:03:34.630 ******** 2025-04-10 01:00:42.387087 | orchestrator | =============================================================================== 2025-04-10 01:00:42.387102 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.79s 2025-04-10 01:00:42.387116 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.14s 2025-04-10 01:00:42.387130 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 15.22s 2025-04-10 01:00:42.387145 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 14.52s 2025-04-10 01:00:42.387159 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.30s 2025-04-10 01:00:42.387173 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.90s 2025-04-10 01:00:42.387188 | orchestrator | mariadb : Copying over galera.cnf -------------------------------------- 10.86s 2025-04-10 01:00:42.387202 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 9.83s 2025-04-10 01:00:42.387216 | orchestrator | mariadb : Copying over config.json files for services ------------------- 6.97s 2025-04-10 01:00:42.387230 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.73s 2025-04-10 01:00:42.387245 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.99s 2025-04-10 01:00:42.387259 | orchestrator | 
mariadb : Wait for MariaDB service port liveness ------------------------ 4.62s 2025-04-10 01:00:42.387273 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.42s 2025-04-10 01:00:42.387287 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.93s 2025-04-10 01:00:42.387301 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 3.04s 2025-04-10 01:00:42.387315 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.86s 2025-04-10 01:00:42.387329 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.71s 2025-04-10 01:00:42.387343 | orchestrator | Check MariaDB service --------------------------------------------------- 2.55s 2025-04-10 01:00:42.387357 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.50s 2025-04-10 01:00:42.387372 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.40s 2025-04-10 01:00:42.387386 | orchestrator | 2025-04-10 01:00:39 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:00:42.387430 | orchestrator | 2025-04-10 01:00:39 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state STARTED 2025-04-10 01:00:42.387445 | orchestrator | 2025-04-10 01:00:39 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:00:42.387479 | orchestrator | 2025-04-10 01:00:42 | INFO  | Task c26d11bf-a965-42cc-b013-e0eae6aa2802 is in state STARTED 2025-04-10 01:00:42.389733 | orchestrator | 2025-04-10 01:00:42 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:00:42.389793 | orchestrator | 2025-04-10 01:00:42 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:00:42.393209 | orchestrator | 2025-04-10 01:00:42 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state STARTED 
2025-04-10 01:02:23.252903 | orchestrator | 2025-04-10 01:02:23 | INFO  | Task c26d11bf-a965-42cc-b013-e0eae6aa2802 is in state SUCCESS 2025-04-10 01:02:23.254147 | orchestrator | 2025-04-10 01:02:23.254196 | orchestrator | 2025-04-10 01:02:23.254212 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 01:02:23.254227 | orchestrator | 2025-04-10 01:02:23.254242 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 01:02:23.254564 | orchestrator | Thursday 10 April 2025 01:00:39 +0000 (0:00:00.412) 0:00:00.412 ******** 2025-04-10 01:02:23.254585 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:02:23.254602 | orchestrator | ok: [testbed-node-1] 
2025-04-10 01:02:23.254616 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:02:23.254630 | orchestrator | 2025-04-10 01:02:23.254645 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 01:02:23.254659 | orchestrator | Thursday 10 April 2025 01:00:40 +0000 (0:00:00.603) 0:00:01.015 ******** 2025-04-10 01:02:23.254673 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-04-10 01:02:23.254687 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-04-10 01:02:23.254701 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-04-10 01:02:23.254715 | orchestrator | 2025-04-10 01:02:23.254729 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-04-10 01:02:23.254744 | orchestrator | 2025-04-10 01:02:23.254757 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-10 01:02:23.254772 | orchestrator | Thursday 10 April 2025 01:00:40 +0000 (0:00:00.378) 0:00:01.393 ******** 2025-04-10 01:02:23.254786 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:02:23.254801 | orchestrator | 2025-04-10 01:02:23.254815 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-04-10 01:02:23.254829 | orchestrator | Thursday 10 April 2025 01:00:41 +0000 (0:00:00.867) 0:00:02.261 ******** 2025-04-10 01:02:23.254873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-10 01:02:23.254907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-10 01:02:23.254937 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-04-10 01:02:23.254975 | orchestrator |
2025-04-10 01:02:23.254990 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-04-10 01:02:23.255005 | orchestrator | Thursday 10 April 2025 01:00:43 +0000 (0:00:02.122) 0:00:04.384 ********
2025-04-10 01:02:23.255019 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:02:23.255033 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:02:23.255048 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:02:23.255062 | orchestrator |
2025-04-10 01:02:23.255076 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-04-10 01:02:23.255091 | orchestrator | Thursday 10 April 2025 01:00:44 +0000 (0:00:00.321) 0:00:04.705 ********
2025-04-10 01:02:23.255113 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-04-10 01:02:23.255129 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-04-10 01:02:23.255143 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-04-10 01:02:23.255160 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-04-10 01:02:23.255175 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-04-10 01:02:23.255192 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-04-10 01:02:23.255208 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-04-10 01:02:23.255224 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-04-10 01:02:23.255239 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-04-10 01:02:23.255255 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-04-10 01:02:23.255271 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-04-10 01:02:23.255287 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-04-10 01:02:23.255303 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-04-10 01:02:23.255319 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-04-10 01:02:23.255335 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-04-10 01:02:23.255351 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-04-10 01:02:23.255367 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-04-10 01:02:23.255389 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-04-10 01:02:23.255406 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-04-10 01:02:23.255423 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-04-10 01:02:23.255438 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-04-10 01:02:23.255456 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-04-10 01:02:23.255478 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-04-10 01:02:23.255495 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-04-10 01:02:23.255510 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-04-10 01:02:23.255524 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True})
2025-04-10 01:02:23.255548 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-04-10 01:02:23.255563 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-04-10 01:02:23.255577 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-04-10 01:02:23.255591 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-04-10 01:02:23.255606 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-04-10 01:02:23.255620 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-04-10 01:02:23.255634 | orchestrator |
2025-04-10 01:02:23.255648 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-10 01:02:23.255662 | orchestrator | Thursday 10 April 2025 01:00:45 +0000 (0:00:00.977) 0:00:05.683 ********
2025-04-10 01:02:23.255676 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:02:23.255690 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:02:23.255704 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:02:23.255719 | orchestrator |
2025-04-10 01:02:23.255733 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-10 01:02:23.255747 | orchestrator | Thursday 10 April 2025 01:00:45 +0000 (0:00:00.533) 0:00:06.217 ********
2025-04-10 01:02:23.255761 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.255776 | orchestrator |
2025-04-10 01:02:23.255796 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-10 01:02:23.255811 | orchestrator | Thursday 10 April 2025 01:00:45 +0000 (0:00:00.120) 0:00:06.337 ********
2025-04-10 01:02:23.255825 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.255862 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.255878 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:02:23.255892 | orchestrator |
2025-04-10 01:02:23.255906 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-10 01:02:23.255920 | orchestrator | Thursday 10 April 2025 01:00:46 +0000 (0:00:00.452) 0:00:06.790 ********
2025-04-10 01:02:23.255934 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:02:23.255948 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:02:23.255962 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:02:23.255976 | orchestrator |
2025-04-10 01:02:23.255990 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-10 01:02:23.256004 | orchestrator | Thursday 10 April 2025 01:00:46 +0000 (0:00:00.352) 0:00:07.142 ********
2025-04-10 01:02:23.256018 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.256038 | orchestrator |
2025-04-10 01:02:23.256052 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-10 01:02:23.256066 | orchestrator | Thursday 10 April 2025 01:00:46 +0000 (0:00:00.319) 0:00:07.462 ********
2025-04-10 01:02:23.256080 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.256094 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.256108 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:02:23.256122 | orchestrator |
2025-04-10 01:02:23.256136 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-10 01:02:23.256150 | orchestrator | Thursday 10 April 2025 01:00:47 +0000 (0:00:00.448) 0:00:07.911 ********
2025-04-10 01:02:23.256164 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:02:23.256178 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:02:23.256192 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:02:23.256206 | orchestrator |
2025-04-10 01:02:23.256220 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-10 01:02:23.256241 | orchestrator | Thursday 10 April 2025 01:00:47 +0000 (0:00:00.652) 0:00:08.563 ********
2025-04-10 01:02:23.256255 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.256269 | orchestrator |
2025-04-10 01:02:23.256283 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-10 01:02:23.256297 | orchestrator | Thursday 10 April 2025 01:00:48 +0000 (0:00:00.186) 0:00:08.749 ********
2025-04-10 01:02:23.256311 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.256325 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.256339 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:02:23.256353 | orchestrator |
2025-04-10 01:02:23.256367 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-10 01:02:23.256381 | orchestrator | Thursday 10 April 2025 01:00:48 +0000 (0:00:00.562) 0:00:09.312 ********
2025-04-10 01:02:23.256395 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:02:23.256409 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:02:23.256423 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:02:23.256437 | orchestrator |
2025-04-10 01:02:23.256451 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-10 01:02:23.256465 | orchestrator | Thursday 10 April 2025 01:00:49 +0000 (0:00:00.483) 0:00:09.796 ********
2025-04-10 01:02:23.256479 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.256493 | orchestrator |
2025-04-10 01:02:23.256507 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-10 01:02:23.256520 | orchestrator | Thursday 10 April 2025 01:00:49 +0000 (0:00:00.124) 0:00:09.920 ********
2025-04-10 01:02:23.256534 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.256548 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.256562 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:02:23.256576 | orchestrator |
2025-04-10 01:02:23.256590 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-10 01:02:23.256604 | orchestrator | Thursday 10 April 2025 01:00:49 +0000 (0:00:00.477) 0:00:10.397 ********
2025-04-10 01:02:23.256618 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:02:23.256632 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:02:23.256646 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:02:23.256660 | orchestrator |
2025-04-10 01:02:23.256674 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-10 01:02:23.256688 | orchestrator | Thursday 10 April 2025 01:00:50 +0000 (0:00:00.349) 0:00:10.747 ********
2025-04-10 01:02:23.256702 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.256716 | orchestrator |
2025-04-10 01:02:23.256735 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-10 01:02:23.256749 | orchestrator | Thursday 10 April 2025 01:00:50 +0000 (0:00:00.553) 0:00:11.300 ********
2025-04-10 01:02:23.256763 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.256777 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.256792 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:02:23.256806 | orchestrator |
2025-04-10 01:02:23.256820 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-10 01:02:23.256834 | orchestrator | Thursday 10 April 2025 01:00:51 +0000 (0:00:00.433) 0:00:11.734 ********
2025-04-10 01:02:23.256913 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:02:23.256928 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:02:23.256942 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:02:23.256957 | orchestrator |
2025-04-10 01:02:23.256971 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-10 01:02:23.256985 | orchestrator | Thursday 10 April 2025 01:00:51 +0000 (0:00:00.586) 0:00:12.320 ********
2025-04-10 01:02:23.256999 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.257013 | orchestrator |
2025-04-10 01:02:23.257027 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-10 01:02:23.257041 | orchestrator | Thursday 10 April 2025 01:00:51 +0000 (0:00:00.241) 0:00:12.562 ********
2025-04-10 01:02:23.257063 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.257077 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.257091 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:02:23.257105 | orchestrator |
2025-04-10 01:02:23.257119 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-10 01:02:23.257134 | orchestrator | Thursday 10 April 2025 01:00:52 +0000 (0:00:00.734) 0:00:13.297 ********
2025-04-10 01:02:23.257154 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:02:23.257169 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:02:23.257183 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:02:23.257197 | orchestrator |
2025-04-10 01:02:23.257212 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-10 01:02:23.257225 | orchestrator | Thursday 10 April 2025 01:00:53 +0000 (0:00:00.463) 0:00:13.760 ********
2025-04-10 01:02:23.257237 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.257250 | orchestrator |
2025-04-10 01:02:23.257263 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-10 01:02:23.257275 | orchestrator | Thursday 10 April 2025 01:00:53 +0000 (0:00:00.139) 0:00:13.900 ********
2025-04-10 01:02:23.257288 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.257300 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.257313 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:02:23.257325 | orchestrator |
2025-04-10 01:02:23.257338 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-10 01:02:23.257350 | orchestrator | Thursday 10 April 2025 01:00:53 +0000 (0:00:00.631) 0:00:14.531 ********
2025-04-10 01:02:23.257362 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:02:23.257375 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:02:23.257387 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:02:23.257399 | orchestrator |
2025-04-10 01:02:23.257412 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-10 01:02:23.257424 | orchestrator | Thursday 10 April 2025 01:00:54 +0000 (0:00:00.468) 0:00:15.000 ********
2025-04-10 01:02:23.257437 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.257449 | orchestrator |
2025-04-10 01:02:23.257462 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-10 01:02:23.257474 | orchestrator | Thursday 10 April 2025 01:00:54 +0000 (0:00:00.140) 0:00:15.140 ********
2025-04-10 01:02:23.257486 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.257499 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.257511 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:02:23.257524 | orchestrator |
2025-04-10 01:02:23.257536 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-10 01:02:23.257549 | orchestrator | Thursday 10 April 2025 01:00:54 +0000 (0:00:00.303) 0:00:15.444 ********
2025-04-10 01:02:23.257561 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:02:23.257574 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:02:23.257586 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:02:23.257599 | orchestrator |
2025-04-10 01:02:23.257611 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-10 01:02:23.257624 | orchestrator | Thursday 10 April 2025 01:00:55 +0000 (0:00:00.495) 0:00:15.939 ********
2025-04-10 01:02:23.257636 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.257649 | orchestrator |
2025-04-10 01:02:23.257755 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-10 01:02:23.257771 | orchestrator | Thursday 10 April 2025 01:00:55 +0000 (0:00:00.151) 0:00:16.091 ********
2025-04-10 01:02:23.257783 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.257796 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.257809 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:02:23.257831 | orchestrator |
2025-04-10 01:02:23.257864 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-10 01:02:23.257877 | orchestrator | Thursday 10 April 2025 01:00:56 +0000 (0:00:00.512) 0:00:16.604 ********
2025-04-10 01:02:23.257890 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:02:23.257912 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:02:23.257925 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:02:23.257937 | orchestrator |
2025-04-10 01:02:23.257950 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-10 01:02:23.257962 | orchestrator | Thursday 10 April 2025 01:00:56 +0000 (0:00:00.445) 0:00:17.049 ********
2025-04-10 01:02:23.257975 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.257987 | orchestrator |
2025-04-10 01:02:23.258000 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-10 01:02:23.258012 | orchestrator | Thursday 10 April 2025 01:00:56 +0000 (0:00:00.131) 0:00:17.180 ********
2025-04-10 01:02:23.258055 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.258069 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.258081 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:02:23.258093 | orchestrator |
2025-04-10 01:02:23.258117 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-10 01:02:23.258130 | orchestrator | Thursday 10 April 2025 01:00:57 +0000 (0:00:00.619) 0:00:17.799 ********
2025-04-10 01:02:23.258142 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:02:23.258155 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:02:23.258167 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:02:23.258180 | orchestrator |
2025-04-10 01:02:23.258192 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-10 01:02:23.258205 | orchestrator | Thursday 10 April 2025 01:00:57 +0000 (0:00:00.534) 0:00:18.334 ********
2025-04-10 01:02:23.258217 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.258230 | orchestrator |
2025-04-10 01:02:23.258242 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-10 01:02:23.258254 | orchestrator | Thursday 10 April 2025 01:00:57 +0000 (0:00:00.125) 0:00:18.460 ********
2025-04-10 01:02:23.258267 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.258280 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.258292 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:02:23.258304 | orchestrator |
2025-04-10 01:02:23.258317 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-04-10 01:02:23.258329 | orchestrator | Thursday 10 April 2025 01:00:58 +0000 (0:00:00.538) 0:00:18.999 ********
2025-04-10 01:02:23.258342 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:02:23.258354 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:02:23.258366 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:02:23.258379 | orchestrator |
2025-04-10 01:02:23.258394 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-04-10 01:02:23.258409 | orchestrator | Thursday 10 April 2025 01:01:00 +0000 (0:00:02.466) 0:00:21.465 ********
2025-04-10 01:02:23.258423 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-04-10 01:02:23.258444 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-04-10 01:02:23.258458 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-04-10 01:02:23.258472 | orchestrator |
2025-04-10 01:02:23.258487 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-04-10 01:02:23.258500 | orchestrator | Thursday 10 April 2025 01:01:03 +0000 (0:00:02.274) 0:00:23.739 ********
2025-04-10 01:02:23.258515 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-04-10 01:02:23.258529 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-04-10 01:02:23.258543 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-04-10 01:02:23.258556 | orchestrator |
2025-04-10 01:02:23.258570 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-04-10 01:02:23.258585 | orchestrator | Thursday 10 April 2025 01:01:05 +0000 (0:00:02.718) 0:00:26.458 ********
2025-04-10 01:02:23.258606 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-04-10 01:02:23.258620 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-04-10 01:02:23.258639 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-04-10 01:02:23.258654 | orchestrator |
2025-04-10 01:02:23.258668 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-04-10 01:02:23.258681 | orchestrator | Thursday 10 April 2025 01:01:08 +0000 (0:00:02.227) 0:00:28.686 ********
2025-04-10 01:02:23.258694 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.258706 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.258719 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:02:23.258731 | orchestrator |
2025-04-10 01:02:23.258744 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-04-10 01:02:23.258756 | orchestrator | Thursday 10 April 2025 01:01:08 +0000 (0:00:00.372) 0:00:29.058 ********
2025-04-10 01:02:23.258769 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.258781 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.258793 | orchestrator | skipping: [testbed-node-2]
2025-04-10
01:02:23.258806 | orchestrator | 2025-04-10 01:02:23.258818 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-10 01:02:23.258830 | orchestrator | Thursday 10 April 2025 01:01:08 +0000 (0:00:00.441) 0:00:29.499 ******** 2025-04-10 01:02:23.258858 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:02:23.258872 | orchestrator | 2025-04-10 01:02:23.258884 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-04-10 01:02:23.258897 | orchestrator | Thursday 10 April 2025 01:01:09 +0000 (0:00:00.763) 0:00:30.263 ******** 2025-04-10 01:02:23.258917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-10 01:02:23.258943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-10 01:02:23.258964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-10 01:02:23.258988 | orchestrator | 2025-04-10 01:02:23.259002 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-04-10 01:02:23.259014 | orchestrator | Thursday 10 April 2025 01:01:11 +0000 (0:00:02.181) 0:00:32.445 ******** 2025-04-10 01:02:23.259027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-10 01:02:23.259045 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:02:23.259065 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-10 01:02:23.259085 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:02:23.259098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-10 01:02:23.259115 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:02:23.259128 | orchestrator | 2025-04-10 01:02:23.259141 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-04-10 01:02:23.259153 | orchestrator | Thursday 10 April 2025 01:01:12 +0000 (0:00:01.052) 0:00:33.498 ******** 2025-04-10 01:02:23.259174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-10 01:02:23.259193 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:02:23.259206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-10 01:02:23.259224 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:02:23.259245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 
'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-10 01:02:23.259269 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:02:23.259282 | orchestrator | 2025-04-10 01:02:23.259295 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-04-10 01:02:23.259307 | orchestrator | Thursday 10 April 2025 01:01:14 +0000 (0:00:01.796) 0:00:35.294 ******** 2025-04-10 01:02:23.259326 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-10 01:02:23.259353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-10 01:02:23.259374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-04-10 01:02:23.259394 | orchestrator |
2025-04-10 01:02:23.259407 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-04-10 01:02:23.259420 | orchestrator | Thursday 10 April 2025 01:01:21 +0000 (0:00:06.551) 0:00:41.845 ********
2025-04-10 01:02:23.259432 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:02:23.259445 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:02:23.259457 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:02:23.259470 | orchestrator |
2025-04-10 01:02:23.259483 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-04-10 01:02:23.259495 | orchestrator | Thursday 10 April 2025 01:01:21 +0000 (0:00:00.502) 0:00:42.348 ********
2025-04-10 01:02:23.259508 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 01:02:23.259520 | orchestrator |
2025-04-10 01:02:23.259537 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-04-10 01:02:23.259550 | orchestrator | Thursday 10 April 2025 01:01:22 +0000 (0:00:00.915) 0:00:43.263 ********
2025-04-10 01:02:23.259563 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:02:23.259575 | orchestrator |
2025-04-10 01:02:23.259588 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-04-10 01:02:23.259600 | orchestrator | Thursday 10 April 2025 01:01:25 +0000 (0:00:02.696) 0:00:45.960 ********
2025-04-10 01:02:23.259612 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:02:23.259625 | orchestrator |
2025-04-10 01:02:23.259637 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-04-10 01:02:23.259650 | orchestrator | Thursday 10 April 2025 01:01:27 +0000 (0:00:02.289) 0:00:48.249 ********
2025-04-10 01:02:23.259662 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:02:23.259674 | orchestrator |
2025-04-10 01:02:23.259687 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-04-10 01:02:23.259699 | orchestrator | Thursday 10 April 2025 01:01:42 +0000 (0:00:14.558) 0:01:02.807 ********
2025-04-10 01:02:23.259712 | orchestrator |
2025-04-10 01:02:23.259724 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-04-10 01:02:23.259737 | orchestrator | Thursday 10 April 2025 01:01:42 +0000 (0:00:00.056) 0:01:02.864 ********
2025-04-10 01:02:23.259749 | orchestrator |
2025-04-10 01:02:23.259761 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-04-10 01:02:23.259774 | orchestrator | Thursday 10 April 2025 01:01:42 +0000 (0:00:00.193) 0:01:03.058 ********
2025-04-10 01:02:23.259786 | orchestrator |
2025-04-10 01:02:23.259798 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-04-10 01:02:23.259811 | orchestrator | Thursday 10 April 2025 01:01:42 +0000 (0:00:00.061) 0:01:03.120 ********
2025-04-10 01:02:23.259823 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:02:23.259850 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:02:23.259864 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:02:23.259876 | orchestrator |
2025-04-10 01:02:23.259889 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 01:02:23.259901 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-04-10 01:02:23.259921 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-04-10 01:02:23.259934 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-04-10 01:02:23.260038 | orchestrator |
2025-04-10 01:02:23.260056 | orchestrator |
2025-04-10 01:02:23.260068 | orchestrator | TASKS RECAP ********************************************************************
2025-04-10 01:02:23.260081 | orchestrator | Thursday 10 April 2025 01:02:20 +0000 (0:00:38.153) 0:01:41.273 ********
2025-04-10 01:02:23.260093 | orchestrator | ===============================================================================
2025-04-10 01:02:23.260106 | orchestrator | horizon : Restart horizon container ------------------------------------ 38.15s
2025-04-10 01:02:23.260118 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.56s
2025-04-10 01:02:23.260131 | orchestrator | horizon : Deploy horizon container -------------------------------------- 6.55s
2025-04-10 01:02:23.260143 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.72s
2025-04-10 01:02:23.260156 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.70s
2025-04-10 01:02:23.260168 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.47s
2025-04-10 01:02:23.260181 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.29s
2025-04-10 01:02:23.260193 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.27s
2025-04-10 01:02:23.260206 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.23s
2025-04-10 01:02:23.260218 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.18s
2025-04-10 01:02:23.260231 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 2.12s
2025-04-10 01:02:23.260243 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.80s
2025-04-10 01:02:23.260256 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.05s
2025-04-10 01:02:23.260280 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.98s
2025-04-10 01:02:23.261868 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.92s
2025-04-10 01:02:23.261887 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.87s
2025-04-10 01:02:23.261897 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s
2025-04-10 01:02:23.261908 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.73s
2025-04-10 01:02:23.261918 | orchestrator | horizon : Update policy file name --------------------------------------- 0.65s
2025-04-10 01:02:23.261928 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.63s
2025-04-10 01:02:23.261938 | orchestrator | 2025-04-10 01:02:23 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:02:23.261949 | orchestrator | 2025-04-10 01:02:23 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED
2025-04-10 01:02:23.261963 | orchestrator | 2025-04-10 01:02:23 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state STARTED
2025-04-10 01:02:26.307377 | orchestrator |
2025-04-10 01:02:23 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:02:26.307525 | orchestrator | 2025-04-10 01:02:26 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:02:26.308495 | orchestrator | 2025-04-10 01:02:26 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED
2025-04-10 01:02:26.309667 | orchestrator | 2025-04-10 01:02:26 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state STARTED
2025-04-10 01:02:29.361153 | orchestrator | 2025-04-10 01:02:26 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:02:29.361325 | orchestrator | 2025-04-10 01:02:29 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:02:29.364978 | orchestrator |
2025-04-10 01:02:29.365026 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-04-10 01:02:29.365352 | orchestrator |
2025-04-10 01:02:29.365370 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-04-10 01:02:29.365385 | orchestrator |
2025-04-10 01:02:29.365399 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-04-10 01:02:29.365413 | orchestrator | Thursday 10 April 2025 01:00:14 +0000 (0:00:01.321) 0:00:01.321 ********
2025-04-10 01:02:29.365428 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:02:29.365443 | orchestrator |
2025-04-10 01:02:29.365458 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-04-10 01:02:29.365471 | orchestrator | Thursday 10 April 2025 01:00:14 +0000 (0:00:00.569) 0:00:01.890 ********
2025-04-10 01:02:29.365486 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0)
2025-04-10 01:02:29.365502 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1)
2025-04-10 01:02:29.365516 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2)
2025-04-10 01:02:29.365530 | orchestrator |
2025-04-10 01:02:29.365544 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-04-10 01:02:29.365558 | orchestrator | Thursday 10 April 2025 01:00:15 +0000 (0:00:00.879) 0:00:02.770 ********
2025-04-10 01:02:29.365572 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:02:29.365586 | orchestrator |
2025-04-10 01:02:29.365600 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-04-10 01:02:29.365614 | orchestrator | Thursday 10 April 2025 01:00:16 +0000 (0:00:00.726) 0:00:03.496 ********
2025-04-10 01:02:29.365628 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:02:29.365643 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:02:29.365657 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:02:29.365671 | orchestrator |
2025-04-10 01:02:29.365685 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-04-10 01:02:29.365699 | orchestrator | Thursday 10 April 2025 01:00:17 +0000 (0:00:00.687) 0:00:04.184 ********
2025-04-10 01:02:29.365713 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:02:29.365727 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:02:29.365741 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:02:29.365755 | orchestrator |
2025-04-10 01:02:29.365769 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-04-10 01:02:29.365783 | orchestrator | Thursday 10 April 2025 01:00:17 +0000 (0:00:00.307) 0:00:04.492 ********
2025-04-10 01:02:29.365797 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:02:29.365810 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:02:29.365824 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:02:29.365867 | orchestrator |
2025-04-10 01:02:29.365883 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-04-10 01:02:29.365897 | orchestrator | Thursday 10 April 2025 01:00:18 +0000 (0:00:00.888) 0:00:05.381 ********
2025-04-10 01:02:29.365911 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:02:29.365924 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:02:29.365938 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:02:29.365971 | orchestrator |
2025-04-10 01:02:29.365987 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-04-10 01:02:29.366004 | orchestrator | Thursday 10 April 2025 01:00:18 +0000 (0:00:00.333) 0:00:05.714 ********
2025-04-10 01:02:29.366066 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:02:29.366086 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:02:29.366103 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:02:29.366141 | orchestrator |
2025-04-10 01:02:29.366157 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-04-10 01:02:29.366173 | orchestrator | Thursday 10 April 2025 01:00:18 +0000 (0:00:00.293) 0:00:06.008 ********
2025-04-10 01:02:29.366188 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:02:29.366204 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:02:29.366220 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:02:29.366236 | orchestrator |
2025-04-10 01:02:29.366252 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-04-10 01:02:29.366269 | orchestrator | Thursday 10 April 2025 01:00:19 +0000 (0:00:00.320) 0:00:06.329 ********
2025-04-10 01:02:29.366284 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:02:29.366299 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:02:29.366313 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:02:29.366327 | orchestrator |
2025-04-10 01:02:29.366341 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-04-10 01:02:29.366355 | orchestrator | Thursday 10 April 2025 01:00:19 +0000 (0:00:00.559) 0:00:06.888 ********
2025-04-10 01:02:29.366369 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:02:29.366383 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:02:29.366398 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:02:29.366411 | orchestrator |
2025-04-10 01:02:29.366430 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-04-10 01:02:29.366464 | orchestrator | Thursday 10 April 2025 01:00:20 +0000 (0:00:00.315) 0:00:07.204 ********
2025-04-10 01:02:29.366490 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-04-10 01:02:29.366514 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-04-10 01:02:29.366538 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-04-10 01:02:29.366563 | orchestrator |
2025-04-10 01:02:29.366585 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-04-10 01:02:29.366599 | orchestrator | Thursday 10 April 2025 01:00:20 +0000 (0:00:00.537) 0:00:07.961 ********
2025-04-10 01:02:29.366613 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:02:29.366627 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:02:29.366641 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:02:29.366655 | orchestrator |
2025-04-10 01:02:29.366669 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-04-10 01:02:29.366683 | orchestrator | Thursday 10 April 2025 01:00:21 +0000 (0:00:00.537) 0:00:08.498 ********
2025-04-10 01:02:29.366708 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-04-10 01:02:29.366723 | orchestrator
| changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-10 01:02:29.366737 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-10 01:02:29.366751 | orchestrator | 2025-04-10 01:02:29.366764 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-10 01:02:29.366778 | orchestrator | Thursday 10 April 2025 01:00:23 +0000 (0:00:02.432) 0:00:10.930 ******** 2025-04-10 01:02:29.366792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-10 01:02:29.366806 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-10 01:02:29.366820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-10 01:02:29.366834 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.366872 | orchestrator | 2025-04-10 01:02:29.366886 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-10 01:02:29.366900 | orchestrator | Thursday 10 April 2025 01:00:24 +0000 (0:00:00.485) 0:00:11.416 ******** 2025-04-10 01:02:29.366915 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-10 01:02:29.366942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-10 01:02:29.366957 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-10 
01:02:29.366971 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.366986 | orchestrator | 2025-04-10 01:02:29.367000 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-10 01:02:29.367014 | orchestrator | Thursday 10 April 2025 01:00:24 +0000 (0:00:00.670) 0:00:12.087 ******** 2025-04-10 01:02:29.367029 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-10 01:02:29.367044 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-10 01:02:29.367059 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-10 01:02:29.367073 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.367087 | orchestrator | 2025-04-10 01:02:29.367101 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-10 01:02:29.367115 | orchestrator | 
Thursday 10 April 2025 01:00:25 +0000 (0:00:00.182) 0:00:12.269 ******** 2025-04-10 01:02:29.367131 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '8d3f9f2e9477', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-10 01:00:22.230183', 'end': '2025-04-10 01:00:22.271866', 'delta': '0:00:00.041683', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8d3f9f2e9477'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-04-10 01:02:29.367160 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'c110dddf57b9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-10 01:00:22.850996', 'end': '2025-04-10 01:00:22.896033', 'delta': '0:00:00.045037', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c110dddf57b9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-04-10 01:02:29.367178 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'bc01cd4365d9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-10 01:00:23.430994', 'end': '2025-04-10 01:00:23.470185', 'delta': '0:00:00.039191', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc01cd4365d9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-04-10 01:02:29.367199 | orchestrator | 2025-04-10 01:02:29.367214 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-04-10 01:02:29.367228 | orchestrator | Thursday 10 April 2025 01:00:25 +0000 (0:00:00.204) 0:00:12.473 ******** 2025-04-10 01:02:29.367242 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:02:29.367256 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:02:29.367270 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:02:29.367284 | orchestrator | 2025-04-10 01:02:29.367298 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-04-10 01:02:29.367312 | orchestrator | Thursday 10 April 2025 01:00:25 +0000 (0:00:00.491) 0:00:12.965 ******** 2025-04-10 01:02:29.367326 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-04-10 01:02:29.367340 | orchestrator | 2025-04-10 01:02:29.367354 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-04-10 01:02:29.367368 | orchestrator | Thursday 10 April 2025 01:00:28 +0000 (0:00:02.417) 0:00:15.382 ******** 2025-04-10 01:02:29.367382 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.367396 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.367410 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.367424 | orchestrator | 2025-04-10 01:02:29.367438 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-10 01:02:29.367452 | orchestrator | Thursday 10 April 2025 01:00:28 +0000 (0:00:00.505) 
0:00:15.888 ******** 2025-04-10 01:02:29.367466 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.367479 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.367493 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.367507 | orchestrator | 2025-04-10 01:02:29.367521 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-10 01:02:29.367535 | orchestrator | Thursday 10 April 2025 01:00:29 +0000 (0:00:00.420) 0:00:16.308 ******** 2025-04-10 01:02:29.367549 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.367563 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.367577 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.367591 | orchestrator | 2025-04-10 01:02:29.367605 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-10 01:02:29.367619 | orchestrator | Thursday 10 April 2025 01:00:29 +0000 (0:00:00.299) 0:00:16.608 ******** 2025-04-10 01:02:29.367633 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:02:29.367647 | orchestrator | 2025-04-10 01:02:29.367660 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-10 01:02:29.367681 | orchestrator | Thursday 10 April 2025 01:00:29 +0000 (0:00:00.158) 0:00:16.766 ******** 2025-04-10 01:02:29.367696 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.367709 | orchestrator | 2025-04-10 01:02:29.367724 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-10 01:02:29.367738 | orchestrator | Thursday 10 April 2025 01:00:29 +0000 (0:00:00.243) 0:00:17.010 ******** 2025-04-10 01:02:29.367752 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.367766 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.367780 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.367794 | orchestrator | 2025-04-10 
01:02:29.367808 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-10 01:02:29.367822 | orchestrator | Thursday 10 April 2025 01:00:30 +0000 (0:00:00.513) 0:00:17.523 ******** 2025-04-10 01:02:29.367861 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.367876 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.367890 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.367904 | orchestrator | 2025-04-10 01:02:29.367918 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-04-10 01:02:29.367932 | orchestrator | Thursday 10 April 2025 01:00:30 +0000 (0:00:00.347) 0:00:17.871 ******** 2025-04-10 01:02:29.367946 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.367960 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.367974 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.367988 | orchestrator | 2025-04-10 01:02:29.368002 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-04-10 01:02:29.368016 | orchestrator | Thursday 10 April 2025 01:00:31 +0000 (0:00:00.326) 0:00:18.197 ******** 2025-04-10 01:02:29.368030 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.368044 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.368064 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.368079 | orchestrator | 2025-04-10 01:02:29.368093 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-04-10 01:02:29.368107 | orchestrator | Thursday 10 April 2025 01:00:31 +0000 (0:00:00.354) 0:00:18.552 ******** 2025-04-10 01:02:29.368121 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.368135 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.368149 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.368162 | orchestrator | 2025-04-10 
01:02:29.368176 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-04-10 01:02:29.368191 | orchestrator | Thursday 10 April 2025 01:00:31 +0000 (0:00:00.589) 0:00:19.142 ******** 2025-04-10 01:02:29.368205 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.368219 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.368232 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.368246 | orchestrator | 2025-04-10 01:02:29.368260 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-04-10 01:02:29.368274 | orchestrator | Thursday 10 April 2025 01:00:32 +0000 (0:00:00.373) 0:00:19.515 ******** 2025-04-10 01:02:29.368288 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.368308 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.368322 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.368336 | orchestrator | 2025-04-10 01:02:29.368351 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-04-10 01:02:29.368365 | orchestrator | Thursday 10 April 2025 01:00:32 +0000 (0:00:00.351) 0:00:19.867 ******** 2025-04-10 01:02:29.368380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7af0ad6a--7281--507c--97d1--7760f3587d37-osd--block--7af0ad6a--7281--507c--97d1--7760f3587d37', 'dm-uuid-LVM-NjIIb6LQMrocij3EZUof8kffa8YMcdMc5e9g1Wb8LVmdWUUgPPS1gxSz6S3506Bt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': 
{'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52286b97--e205--54c6--a29d--cc3afdc4b583-osd--block--52286b97--e205--54c6--a29d--cc3afdc4b583', 'dm-uuid-LVM-HPSxuZ8DHDJ8ZqK8wEV3v0eAT4VCXs4Bx9O8QX2FWr2BjoMNYw0yToUCJN6qRdTD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e6570ad4--669c--53e9--93b8--24292f6b58fb-osd--block--e6570ad4--669c--53e9--93b8--24292f6b58fb', 'dm-uuid-LVM-g5uaQZJhqiIdcOYI8y1QX1dMjLzoaopSwfJSl0BpmhpQ35uuYncttW99JjewdXwF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--543b72d2--41b4--5023--b438--6662cb79109c-osd--block--543b72d2--41b4--5023--b438--6662cb79109c', 'dm-uuid-LVM-uaEs4yIc9u0sB5SzmJQhTTBLeir2in1damUAgW90tEPCYeVhVMEZ3tCsUHO3rvIT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--47ce51ce--522f--5092--939d--97f529b04c78-osd--block--47ce51ce--522f--5092--939d--97f529b04c78', 
'dm-uuid-LVM-4CDMFffTI1LTGWl8RXFR68tCtFdmcX9htiCP5EJoAD2X7cCUVwtP3sFnI33pMg1p'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1024c186--728b--5ddc--b380--e3967fe3a792-osd--block--1024c186--728b--5ddc--b380--e3967fe3a792', 'dm-uuid-LVM-Y00XykzfuS4D5SX65I650rApYtaExU63DS3FPv6iOvr226g6HlMCZezHKcJljJG2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368587 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': 
[], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b', 'scsi-SQEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part1', 'scsi-SQEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part14', 'scsi-SQEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part15', 'scsi-SQEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part16', 'scsi-SQEMU_QEMU_HARDDISK_28ff6eda-e1e7-4701-8f57-9f1d22e0371b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 
'virtual': 1}})  2025-04-10 01:02:29.368833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.368926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817', 'scsi-SQEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d967eed-d41f-4ed0-858d-bb16f205f817-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:02:29.368945 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7af0ad6a--7281--507c--97d1--7760f3587d37-osd--block--7af0ad6a--7281--507c--97d1--7760f3587d37'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-p3XfEl-mz29-Jpky-r1OI-scKs-HVYX-H6St0W', 'scsi-0QEMU_QEMU_HARDDISK_e188828f-11b5-49b7-aa2c-198471f41cb7', 'scsi-SQEMU_QEMU_HARDDISK_e188828f-11b5-49b7-aa2c-198471f41cb7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:02:29.368969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--47ce51ce--522f--5092--939d--97f529b04c78-osd--block--47ce51ce--522f--5092--939d--97f529b04c78'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GeYnBq-RSN3-X1LM-vEoe-Z3mQ-fDse-1VUCb1', 'scsi-0QEMU_QEMU_HARDDISK_7b59c1d3-d88b-4e69-8f5d-bfd6640ee0c1', 'scsi-SQEMU_QEMU_HARDDISK_7b59c1d3-d88b-4e69-8f5d-bfd6640ee0c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:02:29.368984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.369006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1024c186--728b--5ddc--b380--e3967fe3a792-osd--block--1024c186--728b--5ddc--b380--e3967fe3a792'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZzVfzF-Bh0M-b68W-hhnT-aaIn-Ppt1-K83hYV', 'scsi-0QEMU_QEMU_HARDDISK_8309ccf2-021f-4ba0-8871-1baa1ae2c644', 'scsi-SQEMU_QEMU_HARDDISK_8309ccf2-021f-4ba0-8871-1baa1ae2c644'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-10 01:02:29 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:02:29.369212 | orchestrator | 2025-04-10 01:02:29 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:02:29.369228 | orchestrator | 2025-04-10 01:02:29 | INFO  | Task 2f5df373-b5e9-4c1a-a876-5ba5dd977fff is in state SUCCESS 2025-04-10 01:02:29.369243 | orchestrator | 2025-04-10 01:02:29 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:02:29.369257 | orchestrator |
2025-04-10 01:02:29.369272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--52286b97--e205--54c6--a29d--cc3afdc4b583-osd--block--52286b97--e205--54c6--a29d--cc3afdc4b583'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xbmZTz-LB8C-hBHE-q55R-kKbE-kPyk-uviAl0', 'scsi-0QEMU_QEMU_HARDDISK_57ed073f-7848-4dd1-911d-b06790e5cae3', 'scsi-SQEMU_QEMU_HARDDISK_57ed073f-7848-4dd1-911d-b06790e5cae3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:02:29.369294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_221f8640-be1f-4702-ab57-197a8a373172', 'scsi-SQEMU_QEMU_HARDDISK_221f8640-be1f-4702-ab57-197a8a373172'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:02:29.369315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.369330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-10-00-02-22-00']}, 'model': 'QEMU 
DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:02:29.369345 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.369360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f117f5c-a676-4195-9d53-4eb16ef4d9e2', 'scsi-SQEMU_QEMU_HARDDISK_4f117f5c-a676-4195-9d53-4eb16ef4d9e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:02:29.369374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:02:29.369397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-10-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 
'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:02:29.369413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a', 'scsi-SQEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f3113d67-a712-4d61-8002-b363d5a12e6a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:02:29.369437 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.369452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e6570ad4--669c--53e9--93b8--24292f6b58fb-osd--block--e6570ad4--669c--53e9--93b8--24292f6b58fb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Y0A54c-DQxh-euLG-jj02-m3iO-QWD4-7QmJ9P', 'scsi-0QEMU_QEMU_HARDDISK_864e33c6-b4c3-48eb-91b8-2629744c3ba6', 'scsi-SQEMU_QEMU_HARDDISK_864e33c6-b4c3-48eb-91b8-2629744c3ba6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:02:29.369473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--543b72d2--41b4--5023--b438--6662cb79109c-osd--block--543b72d2--41b4--5023--b438--6662cb79109c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tCYsbb-8M5x-ZLLr-FLxc-Liar-cDie-nSwqm0', 'scsi-0QEMU_QEMU_HARDDISK_b0ed1186-9beb-4d4b-adab-3343747bf238', 'scsi-SQEMU_QEMU_HARDDISK_b0ed1186-9beb-4d4b-adab-3343747bf238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:02:29.369488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa805255-2b65-45ba-aa52-d97cf6f3e06a', 'scsi-SQEMU_QEMU_HARDDISK_fa805255-2b65-45ba-aa52-d97cf6f3e06a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:02:29.369502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-10-00-02-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:02:29.369523 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.369537 | orchestrator | 2025-04-10 01:02:29.369551 | orchestrator | TASK [ceph-facts : get ceph current 
status] ************************************ 2025-04-10 01:02:29.369566 | orchestrator | Thursday 10 April 2025 01:00:33 +0000 (0:00:00.680) 0:00:20.547 ******** 2025-04-10 01:02:29.369580 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-04-10 01:02:29.369594 | orchestrator | 2025-04-10 01:02:29.369608 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-04-10 01:02:29.369623 | orchestrator | Thursday 10 April 2025 01:00:34 +0000 (0:00:01.562) 0:00:22.110 ******** 2025-04-10 01:02:29.369637 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:02:29.369651 | orchestrator | 2025-04-10 01:02:29.369665 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-04-10 01:02:29.369679 | orchestrator | Thursday 10 April 2025 01:00:35 +0000 (0:00:00.164) 0:00:22.274 ******** 2025-04-10 01:02:29.369693 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:02:29.369707 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:02:29.369721 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:02:29.369735 | orchestrator | 2025-04-10 01:02:29.369749 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-04-10 01:02:29.369763 | orchestrator | Thursday 10 April 2025 01:00:35 +0000 (0:00:00.377) 0:00:22.651 ******** 2025-04-10 01:02:29.369777 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:02:29.369791 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:02:29.369805 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:02:29.369819 | orchestrator | 2025-04-10 01:02:29.369833 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-04-10 01:02:29.369882 | orchestrator | Thursday 10 April 2025 01:00:36 +0000 (0:00:00.706) 0:00:23.357 ******** 2025-04-10 01:02:29.369897 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:02:29.369911 | orchestrator | ok: [testbed-node-4] 2025-04-10 
01:02:29.369925 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:02:29.369939 | orchestrator | 2025-04-10 01:02:29.369953 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-10 01:02:29.369967 | orchestrator | Thursday 10 April 2025 01:00:36 +0000 (0:00:00.313) 0:00:23.671 ******** 2025-04-10 01:02:29.369981 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:02:29.369995 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:02:29.370009 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:02:29.370087 | orchestrator | 2025-04-10 01:02:29.370102 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-10 01:02:29.370116 | orchestrator | Thursday 10 April 2025 01:00:37 +0000 (0:00:00.929) 0:00:24.601 ******** 2025-04-10 01:02:29.370130 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.370145 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.370159 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.370173 | orchestrator | 2025-04-10 01:02:29.370187 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-10 01:02:29.370207 | orchestrator | Thursday 10 April 2025 01:00:37 +0000 (0:00:00.386) 0:00:24.988 ******** 2025-04-10 01:02:29.370222 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.370236 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.370250 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.370264 | orchestrator | 2025-04-10 01:02:29.370278 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-10 01:02:29.370292 | orchestrator | Thursday 10 April 2025 01:00:38 +0000 (0:00:00.493) 0:00:25.481 ******** 2025-04-10 01:02:29.370306 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.370337 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.370363 | 
orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.370388 | orchestrator | 2025-04-10 01:02:29.370413 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-04-10 01:02:29.370437 | orchestrator | Thursday 10 April 2025 01:00:38 +0000 (0:00:00.540) 0:00:26.022 ******** 2025-04-10 01:02:29.370461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-10 01:02:29.370485 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-10 01:02:29.370509 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-10 01:02:29.370545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-10 01:02:29.370571 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-10 01:02:29.370603 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-10 01:02:29.370618 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.370632 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-10 01:02:29.370646 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-10 01:02:29.370660 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.370674 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-10 01:02:29.370688 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.370702 | orchestrator | 2025-04-10 01:02:29.370716 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-04-10 01:02:29.370730 | orchestrator | Thursday 10 April 2025 01:00:39 +0000 (0:00:00.806) 0:00:26.829 ******** 2025-04-10 01:02:29.370744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-10 01:02:29.370758 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-10 01:02:29.370772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  
2025-04-10 01:02:29.370786 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-10 01:02:29.370800 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-10 01:02:29.370814 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-10 01:02:29.370828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-10 01:02:29.370881 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.370896 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-10 01:02:29.370910 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.370924 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-10 01:02:29.370938 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.370952 | orchestrator | 2025-04-10 01:02:29.370966 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-04-10 01:02:29.370980 | orchestrator | Thursday 10 April 2025 01:00:40 +0000 (0:00:00.792) 0:00:27.621 ******** 2025-04-10 01:02:29.370995 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-04-10 01:02:29.371008 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-04-10 01:02:29.371022 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-04-10 01:02:29.371036 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-04-10 01:02:29.371050 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-04-10 01:02:29.371065 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-04-10 01:02:29.371079 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-04-10 01:02:29.371092 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-04-10 01:02:29.371106 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-04-10 01:02:29.371120 | orchestrator | 2025-04-10 01:02:29.371134 | orchestrator | TASK [ceph-facts : 
set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-04-10 01:02:29.371148 | orchestrator | Thursday 10 April 2025 01:00:42 +0000 (0:00:02.400) 0:00:30.021 ******** 2025-04-10 01:02:29.371162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-10 01:02:29.371185 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-10 01:02:29.371199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-10 01:02:29.371213 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.371227 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-10 01:02:29.371241 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-10 01:02:29.371255 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-10 01:02:29.371269 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.371283 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-10 01:02:29.371297 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-10 01:02:29.371311 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-10 01:02:29.371325 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.371339 | orchestrator | 2025-04-10 01:02:29.371353 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-04-10 01:02:29.371367 | orchestrator | Thursday 10 April 2025 01:00:43 +0000 (0:00:00.677) 0:00:30.699 ******** 2025-04-10 01:02:29.371381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-10 01:02:29.371395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-10 01:02:29.371408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-10 01:02:29.371422 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-10 01:02:29.371436 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-1)  2025-04-10 01:02:29.371450 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-10 01:02:29.371464 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.371478 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.371492 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-10 01:02:29.371506 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-10 01:02:29.371520 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-10 01:02:29.371533 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.371547 | orchestrator | 2025-04-10 01:02:29.371561 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-04-10 01:02:29.371576 | orchestrator | Thursday 10 April 2025 01:00:43 +0000 (0:00:00.445) 0:00:31.144 ******** 2025-04-10 01:02:29.371589 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-10 01:02:29.371605 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-10 01:02:29.371619 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-10 01:02:29.371633 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.371654 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-10 01:02:29.371668 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-10 01:02:29.371682 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-10 01:02:29.371696 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.371710 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': 
'192.168.16.10'})  2025-04-10 01:02:29.371724 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-10 01:02:29.371738 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-10 01:02:29.371752 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.371766 | orchestrator | 2025-04-10 01:02:29.371781 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-04-10 01:02:29.371795 | orchestrator | Thursday 10 April 2025 01:00:44 +0000 (0:00:00.452) 0:00:31.596 ******** 2025-04-10 01:02:29.371815 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:02:29.371829 | orchestrator | 2025-04-10 01:02:29.371896 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-10 01:02:29.371911 | orchestrator | Thursday 10 April 2025 01:00:45 +0000 (0:00:00.778) 0:00:32.375 ******** 2025-04-10 01:02:29.371926 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.371940 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.371954 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.371968 | orchestrator | 2025-04-10 01:02:29.371982 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-10 01:02:29.371995 | orchestrator | Thursday 10 April 2025 01:00:45 +0000 (0:00:00.356) 0:00:32.732 ******** 2025-04-10 01:02:29.372009 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.372023 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.372037 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.372051 | orchestrator | 2025-04-10 01:02:29.372065 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to 
radosgw_address_block ipv6] **** 2025-04-10 01:02:29.372079 | orchestrator | Thursday 10 April 2025 01:00:45 +0000 (0:00:00.333) 0:00:33.065 ******** 2025-04-10 01:02:29.372093 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.372107 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.372121 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.372141 | orchestrator | 2025-04-10 01:02:29.372156 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-10 01:02:29.372170 | orchestrator | Thursday 10 April 2025 01:00:46 +0000 (0:00:00.321) 0:00:33.387 ******** 2025-04-10 01:02:29.372184 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:02:29.372198 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:02:29.372212 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:02:29.372226 | orchestrator | 2025-04-10 01:02:29.372239 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-10 01:02:29.372253 | orchestrator | Thursday 10 April 2025 01:00:46 +0000 (0:00:00.744) 0:00:34.132 ******** 2025-04-10 01:02:29.372267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:02:29.372280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:02:29.372292 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:02:29.372304 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.372317 | orchestrator | 2025-04-10 01:02:29.372329 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-10 01:02:29.372341 | orchestrator | Thursday 10 April 2025 01:00:47 +0000 (0:00:00.459) 0:00:34.591 ******** 2025-04-10 01:02:29.372354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:02:29.372371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:02:29.372383 
| orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:02:29.372396 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.372408 | orchestrator | 2025-04-10 01:02:29.372421 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-10 01:02:29.372433 | orchestrator | Thursday 10 April 2025 01:00:47 +0000 (0:00:00.469) 0:00:35.061 ******** 2025-04-10 01:02:29.372446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:02:29.372458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:02:29.372470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:02:29.372483 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.372495 | orchestrator | 2025-04-10 01:02:29.372508 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-10 01:02:29.372529 | orchestrator | Thursday 10 April 2025 01:00:48 +0000 (0:00:00.456) 0:00:35.517 ******** 2025-04-10 01:02:29.372542 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:02:29.372561 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:02:29.372574 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:02:29.372586 | orchestrator | 2025-04-10 01:02:29.372599 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-10 01:02:29.372611 | orchestrator | Thursday 10 April 2025 01:00:48 +0000 (0:00:00.396) 0:00:35.913 ******** 2025-04-10 01:02:29.372623 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-10 01:02:29.372636 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-10 01:02:29.372648 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-10 01:02:29.372661 | orchestrator | 2025-04-10 01:02:29.372673 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-10 01:02:29.372686 | orchestrator | 
Thursday 10 April 2025 01:00:49 +0000 (0:00:00.893) 0:00:36.806 ******** 2025-04-10 01:02:29.372698 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.372710 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.372729 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.372742 | orchestrator | 2025-04-10 01:02:29.372754 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-10 01:02:29.372767 | orchestrator | Thursday 10 April 2025 01:00:49 +0000 (0:00:00.346) 0:00:37.153 ******** 2025-04-10 01:02:29.372779 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.372791 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.372804 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.372816 | orchestrator | 2025-04-10 01:02:29.372828 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-10 01:02:29.372857 | orchestrator | Thursday 10 April 2025 01:00:50 +0000 (0:00:00.399) 0:00:37.553 ******** 2025-04-10 01:02:29.372870 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-10 01:02:29.372883 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.372895 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-10 01:02:29.372907 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.372920 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-10 01:02:29.372932 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.372945 | orchestrator | 2025-04-10 01:02:29.372957 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-10 01:02:29.372969 | orchestrator | Thursday 10 April 2025 01:00:51 +0000 (0:00:00.752) 0:00:38.306 ******** 2025-04-10 01:02:29.372982 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  
2025-04-10 01:02:29.372994 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.373007 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-10 01:02:29.373020 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.373032 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-10 01:02:29.373044 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.373057 | orchestrator | 2025-04-10 01:02:29.373069 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-10 01:02:29.373081 | orchestrator | Thursday 10 April 2025 01:00:51 +0000 (0:00:00.616) 0:00:38.922 ******** 2025-04-10 01:02:29.373094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-10 01:02:29.373106 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-10 01:02:29.373118 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-10 01:02:29.373130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-10 01:02:29.373143 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-10 01:02:29.373155 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-10 01:02:29.373167 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.373179 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-10 01:02:29.373198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-10 01:02:29.373210 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.373223 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-10 01:02:29.373235 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.373248 | orchestrator | 2025-04-10 
01:02:29.373260 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-04-10 01:02:29.373273 | orchestrator | Thursday 10 April 2025 01:00:52 +0000 (0:00:00.821) 0:00:39.744 ******** 2025-04-10 01:02:29.373285 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.373297 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.373309 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:02:29.373322 | orchestrator | 2025-04-10 01:02:29.373334 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-10 01:02:29.373347 | orchestrator | Thursday 10 April 2025 01:00:52 +0000 (0:00:00.315) 0:00:40.060 ******** 2025-04-10 01:02:29.373359 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-10 01:02:29.373371 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-10 01:02:29.373383 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-10 01:02:29.373396 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-04-10 01:02:29.373408 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-10 01:02:29.373421 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-10 01:02:29.373433 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-10 01:02:29.373445 | orchestrator | 2025-04-10 01:02:29.373457 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-04-10 01:02:29.373470 | orchestrator | Thursday 10 April 2025 01:00:53 +0000 (0:00:01.078) 0:00:41.139 ******** 2025-04-10 01:02:29.373482 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-10 
01:02:29.373494 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-10 01:02:29.373506 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-10 01:02:29.373519 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-04-10 01:02:29.373531 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-10 01:02:29.373544 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-10 01:02:29.373561 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-10 01:02:29.373574 | orchestrator | 2025-04-10 01:02:29.373587 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-04-10 01:02:29.373604 | orchestrator | Thursday 10 April 2025 01:00:56 +0000 (0:00:02.103) 0:00:43.242 ******** 2025-04-10 01:02:29.373616 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:02:29.373629 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:02:29.373642 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-04-10 01:02:29.373654 | orchestrator | 2025-04-10 01:02:29.373667 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-04-10 01:02:29.373679 | orchestrator | Thursday 10 April 2025 01:00:56 +0000 (0:00:00.558) 0:00:43.800 ******** 2025-04-10 01:02:29.373692 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-10 01:02:29.373707 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 
'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-10 01:02:29.373725 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-10 01:02:29.373738 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-10 01:02:29.373751 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-10 01:02:29.373763 | orchestrator | 2025-04-10 01:02:29.373776 | orchestrator | TASK [generate keys] *********************************************************** 2025-04-10 01:02:29.373788 | orchestrator | Thursday 10 April 2025 01:01:36 +0000 (0:00:40.240) 0:01:24.041 ******** 2025-04-10 01:02:29.373800 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.373813 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.373825 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.373878 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.373892 | orchestrator | changed: 
[testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.373905 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.373917 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-04-10 01:02:29.373930 | orchestrator | 2025-04-10 01:02:29.373942 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-04-10 01:02:29.373954 | orchestrator | Thursday 10 April 2025 01:01:57 +0000 (0:00:21.017) 0:01:45.058 ******** 2025-04-10 01:02:29.373967 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.373979 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.373991 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.374002 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.374012 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.374048 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.374059 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-10 01:02:29.374069 | orchestrator | 2025-04-10 01:02:29.374079 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-04-10 01:02:29.374089 | orchestrator | Thursday 10 April 2025 01:02:07 +0000 (0:00:10.100) 0:01:55.159 ******** 2025-04-10 01:02:29.374099 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.374109 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-10 01:02:29.374120 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 
2025-04-10 01:02:29.374130 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:29.374140 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-10 01:02:29.374155 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-10 01:02:32.414534 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:32.414645 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-10 01:02:32.414661 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-10 01:02:32.414675 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:32.414688 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-10 01:02:32.414720 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-10 01:02:32.414733 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:32.414746 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-10 01:02:32.414758 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-10 01:02:32.414771 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-10 01:02:32.414783 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-10 01:02:32.414796 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-10 01:02:32.414809 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-04-10 01:02:32.414822 | orchestrator | 2025-04-10 01:02:32.414892 | orchestrator | PLAY RECAP 
********************************************************************* 2025-04-10 01:02:32.414909 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-04-10 01:02:32.414924 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-04-10 01:02:32.414937 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-04-10 01:02:32.414950 | orchestrator | 2025-04-10 01:02:32.414963 | orchestrator | 2025-04-10 01:02:32.414975 | orchestrator | 2025-04-10 01:02:32.414987 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:02:32.415000 | orchestrator | Thursday 10 April 2025 01:02:26 +0000 (0:00:18.906) 0:02:14.066 ******** 2025-04-10 01:02:32.415012 | orchestrator | =============================================================================== 2025-04-10 01:02:32.415025 | orchestrator | create openstack pool(s) ----------------------------------------------- 40.24s 2025-04-10 01:02:32.415037 | orchestrator | generate keys ---------------------------------------------------------- 21.02s 2025-04-10 01:02:32.415049 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.91s 2025-04-10 01:02:32.415062 | orchestrator | get keys from monitors ------------------------------------------------- 10.10s 2025-04-10 01:02:32.415074 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.43s 2025-04-10 01:02:32.415086 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 2.42s 2025-04-10 01:02:32.415099 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 2.40s 2025-04-10 01:02:32.415111 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 2.10s 2025-04-10 01:02:32.415123 | 
orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.56s 2025-04-10 01:02:32.415135 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.08s 2025-04-10 01:02:32.415147 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.93s 2025-04-10 01:02:32.415160 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.89s 2025-04-10 01:02:32.415172 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.89s 2025-04-10 01:02:32.415184 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.88s 2025-04-10 01:02:32.415217 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.82s 2025-04-10 01:02:32.415231 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.81s 2025-04-10 01:02:32.415243 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.79s 2025-04-10 01:02:32.415255 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.78s 2025-04-10 01:02:32.415268 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.76s 2025-04-10 01:02:32.415280 | orchestrator | ceph-facts : set_fact rgw_instances with rgw multisite ------------------ 0.75s 2025-04-10 01:02:32.415308 | orchestrator | 2025-04-10 01:02:32 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:02:32.419752 | orchestrator | 2025-04-10 01:02:32 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:02:32.421804 | orchestrator | 2025-04-10 01:02:32 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:02:35.482764 | orchestrator | 2025-04-10 01:02:32 | INFO  | Wait 1 second(s) until the next check 2025-04-10 
01:02:35.482987 | orchestrator | 2025-04-10 01:02:35 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:02:35.485694 | orchestrator | 2025-04-10 01:02:35 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:02:35.492978 | orchestrator | 2025-04-10 01:02:35 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:02:38.557139 | orchestrator | 2025-04-10 01:02:35 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:02:38.557278 | orchestrator | 2025-04-10 01:02:38 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:02:38.558585 | orchestrator | 2025-04-10 01:02:38 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:02:38.560349 | orchestrator | 2025-04-10 01:02:38 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:02:38.561901 | orchestrator | 2025-04-10 01:02:38 | INFO  | Task 4bcd8672-3553-4681-8885-b1cba39d7db9 is in state STARTED 2025-04-10 01:02:38.562098 | orchestrator | 2025-04-10 01:02:38 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:02:41.609483 | orchestrator | 2025-04-10 01:02:41 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:02:41.609773 | orchestrator | 2025-04-10 01:02:41 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:02:41.611232 | orchestrator | 2025-04-10 01:02:41 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:02:41.615228 | orchestrator | 2025-04-10 01:02:41 | INFO  | Task 4bcd8672-3553-4681-8885-b1cba39d7db9 is in state STARTED 2025-04-10 01:02:44.668753 | orchestrator | 2025-04-10 01:02:41 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:02:44.668949 | orchestrator | 2025-04-10 01:02:44 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:02:44.669145 | orchestrator 
| 2025-04-10 01:02:44 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:02:44.671127 | orchestrator | 2025-04-10 01:02:44 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:02:44.672831 | orchestrator | 2025-04-10 01:02:44 | INFO  | Task 4bcd8672-3553-4681-8885-b1cba39d7db9 is in state STARTED 2025-04-10 01:02:47.726134 | orchestrator | 2025-04-10 01:02:44 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:02:47.726277 | orchestrator | 2025-04-10 01:02:47 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:02:47.729786 | orchestrator | 2025-04-10 01:02:47 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:02:47.732106 | orchestrator | 2025-04-10 01:02:47 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:02:47.733457 | orchestrator | 2025-04-10 01:02:47 | INFO  | Task 4bcd8672-3553-4681-8885-b1cba39d7db9 is in state STARTED 2025-04-10 01:02:47.733772 | orchestrator | 2025-04-10 01:02:47 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:02:50.776189 | orchestrator | 2025-04-10 01:02:50 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:02:50.778327 | orchestrator | 2025-04-10 01:02:50 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:02:50.780909 | orchestrator | 2025-04-10 01:02:50 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:02:50.783733 | orchestrator | 2025-04-10 01:02:50 | INFO  | Task 4bcd8672-3553-4681-8885-b1cba39d7db9 is in state STARTED 2025-04-10 01:02:53.827620 | orchestrator | 2025-04-10 01:02:50 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:02:53.827768 | orchestrator | 2025-04-10 01:02:53 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:02:53.828102 | orchestrator | 2025-04-10 01:02:53 | INFO  | 
Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:02:53.829307 | orchestrator | 2025-04-10 01:02:53 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:02:53.830689 | orchestrator | 2025-04-10 01:02:53 | INFO  | Task 4bcd8672-3553-4681-8885-b1cba39d7db9 is in state STARTED 2025-04-10 01:02:56.891634 | orchestrator | 2025-04-10 01:02:53 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:02:56.891782 | orchestrator | 2025-04-10 01:02:56 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:02:56.892334 | orchestrator | 2025-04-10 01:02:56 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:02:56.893345 | orchestrator | 2025-04-10 01:02:56 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:02:56.895297 | orchestrator | 2025-04-10 01:02:56 | INFO  | Task 4bcd8672-3553-4681-8885-b1cba39d7db9 is in state STARTED 2025-04-10 01:02:59.944323 | orchestrator | 2025-04-10 01:02:56 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:02:59.944465 | orchestrator | 2025-04-10 01:02:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:02:59.945001 | orchestrator | 2025-04-10 01:02:59 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:02:59.946229 | orchestrator | 2025-04-10 01:02:59 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:02:59.947703 | orchestrator | 2025-04-10 01:02:59 | INFO  | Task 4bcd8672-3553-4681-8885-b1cba39d7db9 is in state STARTED 2025-04-10 01:03:02.996722 | orchestrator | 2025-04-10 01:02:59 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:03:02.996901 | orchestrator | 2025-04-10 01:03:02 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:03:02.997476 | orchestrator | 2025-04-10 01:03:02 | INFO  | Task 
659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:03:02.998867 | orchestrator | 2025-04-10 01:03:02 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:03:03.000487 | orchestrator | 2025-04-10 01:03:02 | INFO  | Task 4bcd8672-3553-4681-8885-b1cba39d7db9 is in state STARTED 2025-04-10 01:03:06.046665 | orchestrator | 2025-04-10 01:03:02 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:03:06.046814 | orchestrator | 2025-04-10 01:03:06 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:03:06.047928 | orchestrator | 2025-04-10 01:03:06 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:03:06.050321 | orchestrator | 2025-04-10 01:03:06 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:03:06.052194 | orchestrator | 2025-04-10 01:03:06 | INFO  | Task 4bcd8672-3553-4681-8885-b1cba39d7db9 is in state STARTED 2025-04-10 01:03:09.106447 | orchestrator | 2025-04-10 01:03:06 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:03:09.106589 | orchestrator | 2025-04-10 01:03:09 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:03:09.110516 | orchestrator | 2025-04-10 01:03:09 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED 2025-04-10 01:03:09.113016 | orchestrator | 2025-04-10 01:03:09 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED 2025-04-10 01:03:09.116541 | orchestrator | 2025-04-10 01:03:09.116583 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-10 01:03:09.116598 | orchestrator | 2025-04-10 01:03:09.116631 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-04-10 01:03:09.116648 | orchestrator | 2025-04-10 01:03:09.116663 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] 
******** 2025-04-10 01:03:09.116678 | orchestrator | Thursday 10 April 2025 01:02:39 +0000 (0:00:00.490) 0:00:00.490 ******** 2025-04-10 01:03:09.116693 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-04-10 01:03:09.116709 | orchestrator | 2025-04-10 01:03:09.116724 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-10 01:03:09.116739 | orchestrator | Thursday 10 April 2025 01:02:40 +0000 (0:00:00.217) 0:00:00.707 ******** 2025-04-10 01:03:09.116755 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-10 01:03:09.116770 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-04-10 01:03:09.116785 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-04-10 01:03:09.116800 | orchestrator | 2025-04-10 01:03:09.116814 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-10 01:03:09.116829 | orchestrator | Thursday 10 April 2025 01:02:41 +0000 (0:00:00.942) 0:00:01.649 ******** 2025-04-10 01:03:09.116903 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-04-10 01:03:09.116918 | orchestrator | 2025-04-10 01:03:09.116932 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-04-10 01:03:09.116946 | orchestrator | Thursday 10 April 2025 01:02:41 +0000 (0:00:00.238) 0:00:01.888 ******** 2025-04-10 01:03:09.116960 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.116975 | orchestrator | 2025-04-10 01:03:09.116991 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-04-10 01:03:09.117005 | orchestrator | Thursday 10 April 2025 01:02:41 +0000 (0:00:00.612) 0:00:02.501 ******** 2025-04-10 01:03:09.117019 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.117034 | orchestrator | 2025-04-10 
01:03:09.117048 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-04-10 01:03:09.117062 | orchestrator | Thursday 10 April 2025 01:02:42 +0000 (0:00:00.142) 0:00:02.643 ******** 2025-04-10 01:03:09.117076 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.117090 | orchestrator | 2025-04-10 01:03:09.117104 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-04-10 01:03:09.117117 | orchestrator | Thursday 10 April 2025 01:02:42 +0000 (0:00:00.444) 0:00:03.088 ******** 2025-04-10 01:03:09.117134 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.117173 | orchestrator | 2025-04-10 01:03:09.117190 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-04-10 01:03:09.117206 | orchestrator | Thursday 10 April 2025 01:02:42 +0000 (0:00:00.143) 0:00:03.231 ******** 2025-04-10 01:03:09.117223 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.117239 | orchestrator | 2025-04-10 01:03:09.117255 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-04-10 01:03:09.117271 | orchestrator | Thursday 10 April 2025 01:02:42 +0000 (0:00:00.129) 0:00:03.361 ******** 2025-04-10 01:03:09.117287 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.117303 | orchestrator | 2025-04-10 01:03:09.117319 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-04-10 01:03:09.117335 | orchestrator | Thursday 10 April 2025 01:02:42 +0000 (0:00:00.132) 0:00:03.493 ******** 2025-04-10 01:03:09.117352 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.117369 | orchestrator | 2025-04-10 01:03:09.117386 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-04-10 01:03:09.117402 | orchestrator | Thursday 10 April 2025 01:02:43 +0000 (0:00:00.151) 0:00:03.645 ******** 
2025-04-10 01:03:09.117418 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.117435 | orchestrator | 2025-04-10 01:03:09.117450 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-04-10 01:03:09.117473 | orchestrator | Thursday 10 April 2025 01:02:43 +0000 (0:00:00.284) 0:00:03.930 ******** 2025-04-10 01:03:09.117490 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-10 01:03:09.117504 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-10 01:03:09.117518 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-10 01:03:09.117532 | orchestrator | 2025-04-10 01:03:09.117546 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-04-10 01:03:09.117560 | orchestrator | Thursday 10 April 2025 01:02:44 +0000 (0:00:00.708) 0:00:04.638 ******** 2025-04-10 01:03:09.117574 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.117588 | orchestrator | 2025-04-10 01:03:09.117602 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-04-10 01:03:09.117621 | orchestrator | Thursday 10 April 2025 01:02:44 +0000 (0:00:00.254) 0:00:04.893 ******** 2025-04-10 01:03:09.117635 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-10 01:03:09.117650 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-10 01:03:09.117664 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-10 01:03:09.117677 | orchestrator | 2025-04-10 01:03:09.117691 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-10 01:03:09.117705 | orchestrator | Thursday 10 April 2025 01:02:46 +0000 (0:00:02.013) 0:00:06.906 ******** 2025-04-10 01:03:09.117719 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-10 01:03:09.117733 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-10 01:03:09.117747 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-10 01:03:09.117761 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.117775 | orchestrator | 2025-04-10 01:03:09.117789 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-10 01:03:09.117815 | orchestrator | Thursday 10 April 2025 01:02:46 +0000 (0:00:00.460) 0:00:07.366 ******** 2025-04-10 01:03:09.117858 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-10 01:03:09.117888 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-10 01:03:09.117925 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-10 01:03:09.117940 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.117998 | orchestrator | 2025-04-10 01:03:09.118064 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-10 01:03:09.118082 | orchestrator | Thursday 10 April 2025 01:02:47 +0000 (0:00:00.819) 0:00:08.185 ******** 2025-04-10 01:03:09.118099 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-10 01:03:09.118116 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-10 01:03:09.118132 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-10 01:03:09.118147 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.118162 | orchestrator | 2025-04-10 01:03:09.118177 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-10 01:03:09.118191 | orchestrator | Thursday 10 April 2025 01:02:47 +0000 (0:00:00.171) 0:00:08.357 ******** 2025-04-10 01:03:09.118208 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '8d3f9f2e9477', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-10 01:02:44.962015', 'end': '2025-04-10 01:02:45.006170', 'delta': '0:00:00.044155', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8d3f9f2e9477'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-04-10 01:03:09.118229 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'c110dddf57b9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-10 01:02:45.551455', 'end': '2025-04-10 01:02:45.602376', 'delta': '0:00:00.050921', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c110dddf57b9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-04-10 01:03:09.118257 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'bc01cd4365d9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-10 01:02:46.123480', 'end': '2025-04-10 01:02:46.164215', 'delta': '0:00:00.040735', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc01cd4365d9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-04-10 01:03:09.118282 | orchestrator | 2025-04-10 01:03:09.118298 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 
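The "set_fact running_mon - container" loop above records, for each monitor host, the stdout of `docker ps -q --filter name=ceph-mon-<hostname>` (the container IDs `8d3f9f2e9477`, `c110dddf57b9`, `bc01cd4365d9` in this run). A minimal sketch of the selection logic, assuming the fact is derived from the first host whose captured stdout is non-empty; the function name and data shape here are illustrative, not ceph-ansible's actual implementation:

```python
# Illustrative sketch (not ceph-ansible code): given the stdout captured from
# `docker ps -q --filter name=ceph-mon-<host>` for each monitor host, pick the
# first host with a running mon container. Hostnames and container IDs below
# are taken from the log records above.
def pick_running_mon(results):
    """results: list of (hostname, stdout) pairs from the docker ps loop."""
    for host, stdout in results:
        container_id = stdout.strip()
        if container_id:  # non-empty stdout => a mon container is running
            return host, container_id
    return None, None

loop_results = [
    ("testbed-node-0", "8d3f9f2e9477\n"),
    ("testbed-node-1", "c110dddf57b9\n"),
    ("testbed-node-2", "bc01cd4365d9\n"),
]
host, cid = pick_running_mon(loop_results)
# -> ("testbed-node-0", "8d3f9f2e9477")
```

Since all three mons report a container ID here, the first host in loop order wins, which matches the subsequent delegation targets seen in the log.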
2025-04-10 01:03:09.118313 | orchestrator | Thursday 10 April 2025 01:02:47 +0000 (0:00:00.227) 0:00:08.584 ******** 2025-04-10 01:03:09.118328 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.118343 | orchestrator | 2025-04-10 01:03:09.118358 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-04-10 01:03:09.118373 | orchestrator | Thursday 10 April 2025 01:02:48 +0000 (0:00:00.251) 0:00:08.836 ******** 2025-04-10 01:03:09.118388 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-04-10 01:03:09.118402 | orchestrator | 2025-04-10 01:03:09.118417 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-04-10 01:03:09.118432 | orchestrator | Thursday 10 April 2025 01:02:49 +0000 (0:00:01.589) 0:00:10.426 ******** 2025-04-10 01:03:09.118447 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.118461 | orchestrator | 2025-04-10 01:03:09.118476 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-10 01:03:09.118491 | orchestrator | Thursday 10 April 2025 01:02:49 +0000 (0:00:00.139) 0:00:10.565 ******** 2025-04-10 01:03:09.118507 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.118522 | orchestrator | 2025-04-10 01:03:09.118536 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-10 01:03:09.118551 | orchestrator | Thursday 10 April 2025 01:02:50 +0000 (0:00:00.245) 0:00:10.811 ******** 2025-04-10 01:03:09.118566 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.118581 | orchestrator | 2025-04-10 01:03:09.118596 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-10 01:03:09.118611 | orchestrator | Thursday 10 April 2025 01:02:50 +0000 (0:00:00.132) 0:00:10.943 ******** 2025-04-10 01:03:09.118626 | orchestrator | ok: [testbed-node-0] 2025-04-10 
01:03:09.118641 | orchestrator | 2025-04-10 01:03:09.118656 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-10 01:03:09.118670 | orchestrator | Thursday 10 April 2025 01:02:50 +0000 (0:00:00.126) 0:00:11.070 ******** 2025-04-10 01:03:09.118685 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.118700 | orchestrator | 2025-04-10 01:03:09.118715 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-10 01:03:09.118730 | orchestrator | Thursday 10 April 2025 01:02:50 +0000 (0:00:00.227) 0:00:11.298 ******** 2025-04-10 01:03:09.118744 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.118760 | orchestrator | 2025-04-10 01:03:09.118775 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-10 01:03:09.118790 | orchestrator | Thursday 10 April 2025 01:02:50 +0000 (0:00:00.123) 0:00:11.421 ******** 2025-04-10 01:03:09.118805 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.118829 | orchestrator | 2025-04-10 01:03:09.118870 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-04-10 01:03:09.118885 | orchestrator | Thursday 10 April 2025 01:02:50 +0000 (0:00:00.134) 0:00:11.555 ******** 2025-04-10 01:03:09.118899 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.118914 | orchestrator | 2025-04-10 01:03:09.118929 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-04-10 01:03:09.118948 | orchestrator | Thursday 10 April 2025 01:02:51 +0000 (0:00:00.170) 0:00:11.725 ******** 2025-04-10 01:03:09.118963 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.118977 | orchestrator | 2025-04-10 01:03:09.118991 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-04-10 01:03:09.119013 | orchestrator | Thursday 10 
April 2025 01:02:51 +0000 (0:00:00.135) 0:00:11.861 ******** 2025-04-10 01:03:09.119027 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.119041 | orchestrator | 2025-04-10 01:03:09.119055 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-04-10 01:03:09.119069 | orchestrator | Thursday 10 April 2025 01:02:51 +0000 (0:00:00.344) 0:00:12.206 ******** 2025-04-10 01:03:09.119083 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.119097 | orchestrator | 2025-04-10 01:03:09.119111 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-04-10 01:03:09.119125 | orchestrator | Thursday 10 April 2025 01:02:51 +0000 (0:00:00.123) 0:00:12.329 ******** 2025-04-10 01:03:09.119139 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.119153 | orchestrator | 2025-04-10 01:03:09.119167 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-04-10 01:03:09.119181 | orchestrator | Thursday 10 April 2025 01:02:51 +0000 (0:00:00.138) 0:00:12.468 ******** 2025-04-10 01:03:09.119195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:03:09.119218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:03:09.119233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:03:09.119248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:03:09.119268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:03:09.119282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:03:09.119297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:03:09.119318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-10 01:03:09.119343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5', 'scsi-SQEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part1', 'scsi-SQEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part14', 'scsi-SQEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part15', 'scsi-SQEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part16', 'scsi-SQEMU_QEMU_HARDDISK_b8a544e2-f8fb-4bb9-a080-c9f48e09edc5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:03:09.119362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d97216ad-03db-4dc0-9fce-19fb462ce1e2', 'scsi-SQEMU_QEMU_HARDDISK_d97216ad-03db-4dc0-9fce-19fb462ce1e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:03:09.119378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b5996d2-64c6-4dbd-ad82-ae9f8c5fd05f', 'scsi-SQEMU_QEMU_HARDDISK_3b5996d2-64c6-4dbd-ad82-ae9f8c5fd05f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:03:09.119393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c91147a-8481-48ce-bf49-6c79ed393785', 'scsi-SQEMU_QEMU_HARDDISK_6c91147a-8481-48ce-bf49-6c79ed393785'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:03:09.119415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-10-00-02-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-10 01:03:09.119430 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.119444 | orchestrator | 2025-04-10 01:03:09.119459 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-04-10 
01:03:09.119473 | orchestrator | Thursday 10 April 2025 01:02:52 +0000 (0:00:00.291) 0:00:12.760 ******** 2025-04-10 01:03:09.119487 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.119501 | orchestrator | 2025-04-10 01:03:09.119515 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-04-10 01:03:09.119529 | orchestrator | Thursday 10 April 2025 01:02:52 +0000 (0:00:00.251) 0:00:13.011 ******** 2025-04-10 01:03:09.119543 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.119557 | orchestrator | 2025-04-10 01:03:09.119571 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-04-10 01:03:09.119585 | orchestrator | Thursday 10 April 2025 01:02:52 +0000 (0:00:00.164) 0:00:13.175 ******** 2025-04-10 01:03:09.119599 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.119613 | orchestrator | 2025-04-10 01:03:09.119627 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-04-10 01:03:09.119641 | orchestrator | Thursday 10 April 2025 01:02:52 +0000 (0:00:00.119) 0:00:13.295 ******** 2025-04-10 01:03:09.119660 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.119674 | orchestrator | 2025-04-10 01:03:09.119689 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-04-10 01:03:09.119703 | orchestrator | Thursday 10 April 2025 01:02:53 +0000 (0:00:00.534) 0:00:13.829 ******** 2025-04-10 01:03:09.119717 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.119731 | orchestrator | 2025-04-10 01:03:09.119745 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-10 01:03:09.119758 | orchestrator | Thursday 10 April 2025 01:02:53 +0000 (0:00:00.122) 0:00:13.951 ******** 2025-04-10 01:03:09.119772 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.119786 | orchestrator | 2025-04-10 
01:03:09.119800 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-10 01:03:09.119815 | orchestrator | Thursday 10 April 2025 01:02:53 +0000 (0:00:00.531) 0:00:14.483 ******** 2025-04-10 01:03:09.119829 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.119872 | orchestrator | 2025-04-10 01:03:09.119887 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-10 01:03:09.119901 | orchestrator | Thursday 10 April 2025 01:02:54 +0000 (0:00:00.365) 0:00:14.849 ******** 2025-04-10 01:03:09.119915 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.119929 | orchestrator | 2025-04-10 01:03:09.119943 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-10 01:03:09.119957 | orchestrator | Thursday 10 April 2025 01:02:54 +0000 (0:00:00.231) 0:00:15.080 ******** 2025-04-10 01:03:09.119971 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.119992 | orchestrator | 2025-04-10 01:03:09.120006 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-04-10 01:03:09.120020 | orchestrator | Thursday 10 April 2025 01:02:54 +0000 (0:00:00.151) 0:00:15.232 ******** 2025-04-10 01:03:09.120034 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-10 01:03:09.120048 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-10 01:03:09.120062 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-10 01:03:09.120076 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.120089 | orchestrator | 2025-04-10 01:03:09.120104 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-04-10 01:03:09.120118 | orchestrator | Thursday 10 April 2025 01:02:55 +0000 (0:00:00.497) 0:00:15.729 ******** 2025-04-10 01:03:09.120131 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2025-04-10 01:03:09.120145 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-10 01:03:09.120159 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-10 01:03:09.120173 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.120187 | orchestrator | 2025-04-10 01:03:09.120201 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-04-10 01:03:09.120226 | orchestrator | Thursday 10 April 2025 01:02:55 +0000 (0:00:00.482) 0:00:16.211 ******** 2025-04-10 01:03:09.120240 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-10 01:03:09.120255 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-10 01:03:09.120269 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-10 01:03:09.120283 | orchestrator | 2025-04-10 01:03:09.120297 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-04-10 01:03:09.120311 | orchestrator | Thursday 10 April 2025 01:02:56 +0000 (0:00:01.213) 0:00:17.425 ******** 2025-04-10 01:03:09.120325 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-10 01:03:09.120339 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-10 01:03:09.120353 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-10 01:03:09.120366 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.120380 | orchestrator | 2025-04-10 01:03:09.120394 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-04-10 01:03:09.120408 | orchestrator | Thursday 10 April 2025 01:02:57 +0000 (0:00:00.228) 0:00:17.654 ******** 2025-04-10 01:03:09.120422 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-10 01:03:09.120436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  
2025-04-10 01:03:09.120450 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-10 01:03:09.120464 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.120478 | orchestrator | 2025-04-10 01:03:09.120492 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-04-10 01:03:09.120506 | orchestrator | Thursday 10 April 2025 01:02:57 +0000 (0:00:00.214) 0:00:17.868 ******** 2025-04-10 01:03:09.120520 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-04-10 01:03:09.120534 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-10 01:03:09.120548 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-10 01:03:09.120562 | orchestrator | 2025-04-10 01:03:09.120576 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-04-10 01:03:09.120596 | orchestrator | Thursday 10 April 2025 01:02:57 +0000 (0:00:00.216) 0:00:18.085 ******** 2025-04-10 01:03:09.120610 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.120624 | orchestrator | 2025-04-10 01:03:09.120638 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-04-10 01:03:09.120652 | orchestrator | Thursday 10 April 2025 01:02:57 +0000 (0:00:00.133) 0:00:18.219 ******** 2025-04-10 01:03:09.120687 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:09.120702 | orchestrator | 2025-04-10 01:03:09.120716 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-10 01:03:09.120730 | orchestrator | Thursday 10 April 2025 01:02:57 +0000 (0:00:00.358) 0:00:18.578 ******** 2025-04-10 01:03:09.120744 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-10 01:03:09.120765 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-10 01:03:09.120779 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-10 01:03:09.120794 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-10 01:03:09.120808 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-10 01:03:09.120822 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-10 01:03:09.120870 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-10 01:03:09.120887 | orchestrator | 2025-04-10 01:03:09.120902 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-04-10 01:03:09.120916 | orchestrator | Thursday 10 April 2025 01:02:58 +0000 (0:00:00.896) 0:00:19.474 ******** 2025-04-10 01:03:09.120930 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-10 01:03:09.120944 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-10 01:03:09.120958 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-10 01:03:09.120972 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-10 01:03:09.120985 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-10 01:03:09.120999 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-10 01:03:09.121013 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-10 01:03:09.121027 | orchestrator | 2025-04-10 01:03:09.121041 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-04-10 
01:03:09.121055 | orchestrator | Thursday 10 April 2025 01:03:00 +0000 (0:00:01.569) 0:00:21.043 ******** 2025-04-10 01:03:09.121069 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:09.121083 | orchestrator | 2025-04-10 01:03:09.121097 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-04-10 01:03:09.121111 | orchestrator | Thursday 10 April 2025 01:03:00 +0000 (0:00:00.467) 0:00:21.510 ******** 2025-04-10 01:03:09.121125 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-10 01:03:09.121139 | orchestrator | 2025-04-10 01:03:09.121153 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-04-10 01:03:09.121173 | orchestrator | Thursday 10 April 2025 01:03:01 +0000 (0:00:00.603) 0:00:22.114 ******** 2025-04-10 01:03:09.121187 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-04-10 01:03:09.121201 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-04-10 01:03:09.121215 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-04-10 01:03:09.121229 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-04-10 01:03:09.121243 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-04-10 01:03:09.121257 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-04-10 01:03:09.121270 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-04-10 01:03:09.121284 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-04-10 01:03:09.121298 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-04-10 01:03:09.121319 | orchestrator | changed: 
[testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-04-10 01:03:09.121333 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-04-10 01:03:09.121347 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-04-10 01:03:09.121361 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-04-10 01:03:09.121375 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-04-10 01:03:09.121389 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-04-10 01:03:09.121402 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-04-10 01:03:09.121416 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-04-10 01:03:09.121430 | orchestrator | 2025-04-10 01:03:09.121444 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:03:09.121459 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-04-10 01:03:09.121474 | orchestrator | 2025-04-10 01:03:09.121487 | orchestrator | 2025-04-10 01:03:09.121501 | orchestrator | 2025-04-10 01:03:09.121515 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:03:09.121529 | orchestrator | Thursday 10 April 2025 01:03:07 +0000 (0:00:06.250) 0:00:28.365 ******** 2025-04-10 01:03:09.121543 | orchestrator | =============================================================================== 2025-04-10 01:03:09.121557 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.25s 2025-04-10 01:03:09.121571 | orchestrator | ceph-facts : find a running mon container ------------------------------- 
2.01s 2025-04-10 01:03:09.121586 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.59s 2025-04-10 01:03:09.121606 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.57s 2025-04-10 01:03:12.166315 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.21s 2025-04-10 01:03:12.166436 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.94s 2025-04-10 01:03:12.166455 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.90s 2025-04-10 01:03:12.166470 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.82s 2025-04-10 01:03:12.166484 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.71s 2025-04-10 01:03:12.166498 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.61s 2025-04-10 01:03:12.166512 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.60s 2025-04-10 01:03:12.166547 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.53s 2025-04-10 01:03:12.166562 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.53s 2025-04-10 01:03:12.166576 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.50s 2025-04-10 01:03:12.166590 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.48s 2025-04-10 01:03:12.166604 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.47s 2025-04-10 01:03:12.166618 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.46s 2025-04-10 01:03:12.166632 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.44s 
2025-04-10 01:03:12.166646 | orchestrator | ceph-facts : set osd_pool_default_crush_rule fact ----------------------- 0.37s
2025-04-10 01:03:12.166660 | orchestrator | ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli --- 0.36s
2025-04-10 01:03:12.166674 | orchestrator | 2025-04-10 01:03:09 | INFO  | Task 4bcd8672-3553-4681-8885-b1cba39d7db9 is in state SUCCESS
2025-04-10 01:03:12.166714 | orchestrator | 2025-04-10 01:03:09 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:03:12.166747 | orchestrator | 2025-04-10 01:03:12 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:03:12.167658 | orchestrator | 2025-04-10 01:03:12 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state STARTED
2025-04-10 01:03:12.168939 | orchestrator | 2025-04-10 01:03:12 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED
2025-04-10 01:03:15.217058 | orchestrator | 2025-04-10 01:03:12 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:03:15.217227 | orchestrator | 2025-04-10 01:03:15 | INFO  | Task 8c9508a2-a849-41af-9b15-ba6aacc3271c is in state STARTED
2025-04-10 01:03:15.220350 | orchestrator | 2025-04-10 01:03:15 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:03:15.221023 | orchestrator | 2025-04-10 01:03:15 | INFO  | Task 659e06f0-248c-425e-8231-66eca47044a4 is in state SUCCESS
2025-04-10 01:03:15.222700 | orchestrator | 2025-04-10 01:03:15 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED
2025-04-10 01:03:18.271991 | orchestrator | 2025-04-10 01:03:15 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:03:18.272131 | orchestrator | 2025-04-10 01:03:18 | INFO  | Task 8c9508a2-a849-41af-9b15-ba6aacc3271c is in state STARTED
2025-04-10 01:03:18.273082 | orchestrator | 2025-04-10 01:03:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:03:18.274596 | orchestrator | 2025-04-10 01:03:18 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED
2025-04-10 01:03:21.314864 | orchestrator | 2025-04-10 01:03:18 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:03:21.315001 | orchestrator | 2025-04-10 01:03:21 | INFO  | Task 8c9508a2-a849-41af-9b15-ba6aacc3271c is in state STARTED
2025-04-10 01:03:21.316524 | orchestrator | 2025-04-10 01:03:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:03:21.319282 | orchestrator | 2025-04-10 01:03:21 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED
2025-04-10 01:03:21.320973 | orchestrator | 2025-04-10 01:03:21 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:03:24.366184 | orchestrator | 2025-04-10 01:03:24 | INFO  | Task 8c9508a2-a849-41af-9b15-ba6aacc3271c is in state STARTED
2025-04-10 01:03:24.368258 | orchestrator | 2025-04-10 01:03:24 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:03:24.370093 | orchestrator | 2025-04-10 01:03:24 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state STARTED
2025-04-10 01:03:27.418917 | orchestrator | 2025-04-10 01:03:24 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:03:27.419060 | orchestrator | 2025-04-10 01:03:27 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:03:27.422136 | orchestrator | 2025-04-10 01:03:27 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED
2025-04-10 01:03:27.425638 | orchestrator | 2025-04-10 01:03:27 | INFO  | Task 8c9508a2-a849-41af-9b15-ba6aacc3271c is in state STARTED
2025-04-10 01:03:27.426333 | orchestrator | 2025-04-10 01:03:27 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:03:27.426374 | orchestrator | 2025-04-10 01:03:27 | INFO  | Task 4d1a1334-d267-4f21-9488-55fff3443eaa is in state SUCCESS
2025-04-10 01:03:27.427815 | orchestrator |
2025-04-10 01:03:27.427919 |
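The INFO lines above show the OSISM client polling each task's state roughly once per second ("Wait 1 second(s) until the next check") until every task reaches SUCCESS. A rough sketch of such a poll loop; `get_state`, the task IDs, and the state names other than STARTED/SUCCESS are hypothetical stand-ins, not the actual client API:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll until every task reports SUCCESS; fail fast on FAILURE.

    get_state(task_id) -> str is a caller-supplied lookup (hypothetical here).
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            if state == "SUCCESS":
                pending.discard(task_id)
            elif state == "FAILURE":
                raise RuntimeError(f"task {task_id} failed")
        if pending:
            # "Wait 1 second(s) until the next check"
            time.sleep(interval)
    return True
```

Tasks that stay STARTED are simply re-checked on the next pass, which matches the repeated STARTED lines in the log.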
orchestrator |
2025-04-10 01:03:27.427930 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-04-10 01:03:27.427938 | orchestrator |
2025-04-10 01:03:27.427965 | orchestrator | TASK [Check ceph keys] *********************************************************
2025-04-10 01:03:27.427973 | orchestrator | Thursday 10 April 2025 01:02:30 +0000 (0:00:00.149) 0:00:00.150 ********
2025-04-10 01:03:27.427981 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-04-10 01:03:27.427988 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-04-10 01:03:27.427996 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-04-10 01:03:27.428004 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-04-10 01:03:27.428046 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-04-10 01:03:27.428054 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-04-10 01:03:27.428062 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-04-10 01:03:27.428069 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-04-10 01:03:27.428077 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-04-10 01:03:27.428155 | orchestrator |
2025-04-10 01:03:27.428165 | orchestrator | TASK [Set _fetch_ceph_keys fact] ***********************************************
2025-04-10 01:03:27.428173 | orchestrator | Thursday 10 April 2025 01:02:33 +0000 (0:00:03.041) 0:00:03.191 ********
2025-04-10 01:03:27.428180 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-04-10 01:03:27.428188 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
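The "Check ceph keys" loop above verifies one keyring file per item on localhost (with some names repeated because they are later copied to several destinations). A minimal sketch of the same existence check; the fetch directory `/tmp/ceph-keys` is an assumption for illustration, not the playbook's real path:

```python
import os

# Hypothetical local fetch directory; the playbook's real path is not shown here.
FETCH_DIR = "/tmp/ceph-keys"

KEYRINGS = [
    "ceph.client.admin.keyring",
    "ceph.client.cinder.keyring",
    "ceph.client.cinder-backup.keyring",
    "ceph.client.nova.keyring",
    "ceph.client.glance.keyring",
    "ceph.client.gnocchi.keyring",
    "ceph.client.manila.keyring",
]

def missing_keyrings(fetch_dir, names):
    """Return the keyring names that are not present as files in fetch_dir."""
    return [n for n in names if not os.path.isfile(os.path.join(fetch_dir, n))]
```

A list with duplicate names (as in the loop output above) is harmless here, since each name is checked independently.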
2025-04-10 01:03:27.428195 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-04-10 01:03:27.428203 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-04-10 01:03:27.428210 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-04-10 01:03:27.428218 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-04-10 01:03:27.428226 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-04-10 01:03:27.428233 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-04-10 01:03:27.428241 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-04-10 01:03:27.428248 | orchestrator |
2025-04-10 01:03:27.428267 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] ***
2025-04-10 01:03:27.428275 | orchestrator | Thursday 10 April 2025 01:02:33 +0000 (0:00:00.255) 0:00:03.446 ********
2025-04-10 01:03:27.428283 | orchestrator | ok: [testbed-manager] => {
2025-04-10 01:03:27.428293 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete."
2025-04-10 01:03:27.428301 | orchestrator | }
2025-04-10 01:03:27.428544 | orchestrator |
2025-04-10 01:03:27.428555 | orchestrator | TASK [Fetch ceph keys from the first monitor node] *****************************
2025-04-10 01:03:27.428563 | orchestrator | Thursday 10 April 2025 01:02:34 +0000 (0:00:00.176) 0:00:03.622 ********
2025-04-10 01:03:27.428570 | orchestrator | changed: [testbed-manager]
2025-04-10 01:03:27.428578 | orchestrator |
2025-04-10 01:03:27.428586 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] ***********
2025-04-10 01:03:27.428593 | orchestrator | Thursday 10 April 2025 01:03:08 +0000 (0:00:34.420) 0:00:38.043 ********
2025-04-10 01:03:27.428602 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'})
2025-04-10 01:03:27.428610 | orchestrator |
2025-04-10 01:03:27.428617 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ********************
2025-04-10 01:03:27.428625 | orchestrator | Thursday 10 April 2025 01:03:08 +0000 (0:00:00.463) 0:00:38.507 ********
2025-04-10 01:03:27.428641 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'})
2025-04-10 01:03:27.428650 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'})
2025-04-10 01:03:27.428657 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'})
2025-04-10 01:03:27.428665 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'})
2025-04-10 01:03:27.428673 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'})
2025-04-10 01:03:27.428702 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'})
2025-04-10 01:03:27.428711 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'})
2025-04-10 01:03:27.428719 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'})
2025-04-10 01:03:27.428726 | orchestrator |
2025-04-10 01:03:27.428734 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] *******************
2025-04-10 01:03:27.428742 | orchestrator | Thursday 10 April 2025 01:03:11 +0000 (0:00:03.017) 0:00:41.524 ********
2025-04-10 01:03:27.428749 | orchestrator | skipping: [testbed-manager]
2025-04-10 01:03:27.428757 | orchestrator |
2025-04-10 01:03:27.428765 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 01:03:27.428773 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-10 01:03:27.428781 | orchestrator |
2025-04-10 01:03:27.428788 | orchestrator | Thursday 10 April 2025 01:03:11 +0000 (0:00:00.032) 0:00:41.557 ********
2025-04-10 01:03:27.428796 | orchestrator | ===============================================================================
2025-04-10 01:03:27.428803 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 34.42s
2025-04-10 01:03:27.428811 | orchestrator | Check ceph keys --------------------------------------------------------- 3.04s
2025-04-10 01:03:27.428819 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 3.02s
2025-04-10 01:03:27.428826 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.46s
2025-04-10 01:03:27.428856 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.26s
2025-04-10 01:03:27.428870 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.18s
2025-04-10 01:03:27.428883 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s
2025-04-10 01:03:27.428895 | orchestrator |
2025-04-10 01:03:27.428907 | orchestrator |
2025-04-10 01:03:27.428915 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-10 01:03:27.428922 | orchestrator |
2025-04-10 01:03:27.428930 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-10 01:03:27.428938 | orchestrator | Thursday 10 April 2025 01:00:39 +0000 (0:00:00.396) 0:00:00.396 ********
2025-04-10 01:03:27.428945 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:03:27.428954 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:03:27.428961 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:03:27.428969 | orchestrator |
2025-04-10 01:03:27.428982 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-10 01:03:27.428990 | orchestrator | Thursday 10 April 2025 01:00:40 +0000 (0:00:00.492) 0:00:00.889 ********
2025-04-10 01:03:27.428998 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-04-10 01:03:27.429005 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-04-10
01:03:27.429013 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-04-10 01:03:27.429021 | orchestrator | 2025-04-10 01:03:27.429028 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-04-10 01:03:27.429036 | orchestrator | 2025-04-10 01:03:27.429043 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-10 01:03:27.429051 | orchestrator | Thursday 10 April 2025 01:00:40 +0000 (0:00:00.448) 0:00:01.337 ******** 2025-04-10 01:03:27.429059 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:03:27.429067 | orchestrator | 2025-04-10 01:03:27.429075 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-04-10 01:03:27.429082 | orchestrator | Thursday 10 April 2025 01:00:41 +0000 (0:00:00.938) 0:00:02.276 ******** 2025-04-10 01:03:27.429093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.429125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.429135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.429148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429223 | orchestrator | 2025-04-10 01:03:27.429231 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-04-10 01:03:27.429238 | orchestrator | Thursday 10 April 2025 01:00:44 +0000 (0:00:02.951) 0:00:05.227 ******** 2025-04-10 01:03:27.429246 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-04-10 01:03:27.429254 | orchestrator | 2025-04-10 01:03:27.429266 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-04-10 01:03:27.429274 | orchestrator | Thursday 10 April 2025 01:00:45 +0000 (0:00:00.619) 0:00:05.846 ******** 2025-04-10 01:03:27.429282 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:27.429290 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:03:27.429297 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:03:27.429305 | orchestrator | 2025-04-10 01:03:27.429312 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-04-10 01:03:27.429320 | orchestrator | Thursday 10 April 2025 01:00:45 +0000 (0:00:00.464) 0:00:06.311 ******** 2025-04-10 01:03:27.429327 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-10 01:03:27.429335 | orchestrator | 2025-04-10 01:03:27.429342 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-10 01:03:27.429350 | orchestrator | Thursday 10 April 2025 01:00:46 +0000 (0:00:00.467) 0:00:06.779 ******** 2025-04-10 01:03:27.429357 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:03:27.429365 | orchestrator | 2025-04-10 01:03:27.429372 | orchestrator | TASK [service-cert-copy : keystone 
| Copying over extra CA certificates] ******* 2025-04-10 01:03:27.429380 | orchestrator | Thursday 10 April 2025 01:00:46 +0000 (0:00:00.671) 0:00:07.450 ******** 2025-04-10 01:03:27.429388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.429401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.429414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.429422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8023'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 
01:03:27.429460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429480 | orchestrator | 2025-04-10 01:03:27.429488 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-04-10 01:03:27.429496 | orchestrator | Thursday 10 April 2025 01:00:50 +0000 (0:00:03.472) 0:00:10.923 ******** 2025-04-10 01:03:27.429504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-10 01:03:27.429512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 01:03:27.429520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  
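The container definitions above configure healthchecks such as `healthcheck_curl http://192.168.16.12:5000` with a 30s interval and timeout, 3 retries, and a 5s start period. An illustrative HTTP probe in the same spirit; this is a sketch, not kolla's actual `healthcheck_curl` script:

```python
import urllib.error
import urllib.request

def http_healthcheck(url, timeout=30):
    """Return True if the endpoint answers with a non-error HTTP status.

    Loosely mirrors a 'healthcheck_curl http://...:5000' style probe:
    any 2xx/3xx response counts as healthy.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except urllib.error.HTTPError:
        # The server responded, but with an error status (4xx/5xx).
        return False
    except (urllib.error.URLError, OSError):
        # No usable response: connection refused, DNS failure, timeout.
        return False
```

In the container healthcheck, the probe's exit status (rather than a return value) would decide healthy vs. unhealthy, and the engine retries it per the `interval`/`retries` settings shown above.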
2025-04-10 01:03:27.429528 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:27.429540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-10 01:03:27.429553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 01:03:27.429561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-10 01:03:27.429569 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:03:27.429577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-10 01:03:27.429585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-04-10 01:03:27.429597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-04-10 01:03:27.429609 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:03:27.429617 | orchestrator |
2025-04-10 01:03:27.429625 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-04-10 01:03:27.429632 | orchestrator | Thursday 10 April 2025 01:00:51 +0000 (0:00:01.291) 0:00:12.214 ********
2025-04-10 01:03:27.429640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled':
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-10 01:03:27.429649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 01:03:27.429656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-10 01:03:27.429664 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:27.429673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-10 01:03:27.429689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 01:03:27.429697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-10 01:03:27.429705 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:03:27.429713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-10 01:03:27.429721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})
2025-04-10 01:03:27.429729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-04-10 01:03:27.429737 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:03:27.429744 | orchestrator |
2025-04-10 01:03:27.429754 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-04-10 01:03:27.429766 | orchestrator | Thursday 10 April 2025 01:00:53 +0000 (0:00:01.577) 0:00:13.791 ********
2025-04-10 01:03:27.429785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.429805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.429818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.429861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 01:03:27.429926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-04-10 01:03:27.429933 | orchestrator |
2025-04-10 01:03:27.429941 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-04-10 01:03:27.429949 | orchestrator | Thursday 10 April 2025 01:00:56 +0000 (0:00:03.495) 0:00:17.287 ********
2025-04-10 01:03:27.429957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-04-10 01:03:27.429973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 01:03:27.429985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.429994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.430002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 01:03:27.430010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 01:03:27.430056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-04-10 01:03:27.430071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-04-10 01:03:27.430080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-04-10 01:03:27.430088 | orchestrator |
2025-04-10 01:03:27.430096 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-04-10 01:03:27.430103 | orchestrator | Thursday 10 April 2025 01:01:02 +0000 (0:00:05.836) 0:00:23.123 ********
2025-04-10 01:03:27.430111 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:03:27.430118 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:03:27.430126 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:03:27.430133 | orchestrator |
2025-04-10 01:03:27.430141 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-04-10 01:03:27.430148 | orchestrator | Thursday 10 April 2025 01:01:04 +0000 (0:00:01.975) 0:00:25.099 ********
2025-04-10 01:03:27.430156 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:03:27.430238 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:03:27.430246 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:03:27.430253 | orchestrator |
2025-04-10 01:03:27.430261 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-04-10 01:03:27.430268 | orchestrator | Thursday 10 April 2025 01:01:05 +0000 (0:00:00.457) 0:00:26.295 ********
2025-04-10 01:03:27.430276 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:03:27.430283 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:03:27.430291 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:03:27.430298 | orchestrator |
2025-04-10 01:03:27.430306 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-04-10 01:03:27.430313 | orchestrator | Thursday 10 April 2025 01:01:05 +0000 (0:00:00.496) 0:00:26.752 ********
2025-04-10 01:03:27.430320 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:03:27.430328 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:03:27.430344 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:03:27.430352 | orchestrator |
2025-04-10 01:03:27.430360 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-04-10 01:03:27.430372 | orchestrator | Thursday 10 April 2025 01:01:06 +0000 (0:00:00.496) 0:00:27.249
******** 2025-04-10 01:03:27.430384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.430398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 01:03:27.430418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.430433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 01:03:27.430445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.430465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-10 01:03:27.430478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 01:03:27.430498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 01:03:27.430510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 01:03:27.430518 | orchestrator | 2025-04-10 01:03:27.430525 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-10 01:03:27.430533 | orchestrator | Thursday 10 April 2025 01:01:09 +0000 (0:00:02.763) 0:00:30.013 ******** 2025-04-10 01:03:27.430541 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:27.430548 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:03:27.430556 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:03:27.430563 | orchestrator | 2025-04-10 01:03:27.430571 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-04-10 01:03:27.430578 | orchestrator | Thursday 10 April 2025 01:01:09 +0000 (0:00:00.293) 0:00:30.307 ******** 2025-04-10 
01:03:27.430586 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-10 01:03:27.430594 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-10 01:03:27.430606 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-10 01:03:27.430613 | orchestrator | 2025-04-10 01:03:27.430621 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-04-10 01:03:27.430629 | orchestrator | Thursday 10 April 2025 01:01:11 +0000 (0:00:02.456) 0:00:32.763 ******** 2025-04-10 01:03:27.430636 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-10 01:03:27.430644 | orchestrator | 2025-04-10 01:03:27.430656 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-04-10 01:03:27.430663 | orchestrator | Thursday 10 April 2025 01:01:12 +0000 (0:00:00.798) 0:00:33.562 ******** 2025-04-10 01:03:27.430670 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:27.430678 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:03:27.430685 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:03:27.430693 | orchestrator | 2025-04-10 01:03:27.430700 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-04-10 01:03:27.430708 | orchestrator | Thursday 10 April 2025 01:01:14 +0000 (0:00:01.323) 0:00:34.886 ******** 2025-04-10 01:03:27.430715 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-10 01:03:27.430723 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-10 01:03:27.430730 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-10 01:03:27.430738 | orchestrator | 2025-04-10 01:03:27.430745 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-04-10 01:03:27.430753 | 
orchestrator | Thursday 10 April 2025 01:01:14 +0000 (0:00:00.815) 0:00:35.701 ******** 2025-04-10 01:03:27.430760 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:27.430769 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:03:27.430776 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:03:27.430784 | orchestrator | 2025-04-10 01:03:27.430791 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-04-10 01:03:27.430799 | orchestrator | Thursday 10 April 2025 01:01:15 +0000 (0:00:00.321) 0:00:36.023 ******** 2025-04-10 01:03:27.430806 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-10 01:03:27.430814 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-10 01:03:27.430821 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-10 01:03:27.430829 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-10 01:03:27.430879 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-10 01:03:27.430888 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-10 01:03:27.430897 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-10 01:03:27.430906 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-10 01:03:27.430914 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-10 01:03:27.430923 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-10 01:03:27.430932 | orchestrator | changed: [testbed-node-1] => 
(item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-10 01:03:27.430944 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-10 01:03:27.430954 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-10 01:03:27.430962 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-10 01:03:27.430974 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-10 01:03:27.430988 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-10 01:03:27.430996 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-10 01:03:27.431005 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-10 01:03:27.431014 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-10 01:03:27.431023 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-10 01:03:27.431031 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-10 01:03:27.431040 | orchestrator | 2025-04-10 01:03:27.431048 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-04-10 01:03:27.431056 | orchestrator | Thursday 10 April 2025 01:01:28 +0000 (0:00:13.128) 0:00:49.152 ******** 2025-04-10 01:03:27.431064 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-10 01:03:27.431072 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-10 01:03:27.431080 | orchestrator | changed: [testbed-node-2] => 
(item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-10 01:03:27.431088 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-10 01:03:27.431096 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-10 01:03:27.431104 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-10 01:03:27.431112 | orchestrator | 2025-04-10 01:03:27.431120 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-04-10 01:03:27.431127 | orchestrator | Thursday 10 April 2025 01:01:31 +0000 (0:00:03.374) 0:00:52.526 ******** 2025-04-10 01:03:27.431136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.431146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.431160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-10 01:03:27.431174 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-10 01:03:27.431183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-10 01:03:27.431191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-10 01:03:27.431199 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 01:03:27.431208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 01:03:27.431224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-10 01:03:27.431233 | orchestrator | 2025-04-10 01:03:27.431240 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2025-04-10 01:03:27.431247 | orchestrator | Thursday 10 April 2025 01:01:34 +0000 (0:00:02.826) 0:00:55.353 ******** 2025-04-10 01:03:27.431254 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:27.431261 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:03:27.431268 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:03:27.431275 | orchestrator | 2025-04-10 01:03:27.431283 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-04-10 01:03:27.431290 | orchestrator | Thursday 10 April 2025 01:01:34 +0000 (0:00:00.281) 0:00:55.635 ******** 2025-04-10 01:03:27.431297 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:03:27.431304 | orchestrator | 2025-04-10 01:03:27.431311 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-04-10 01:03:27.431318 | orchestrator | Thursday 10 April 2025 01:01:37 +0000 (0:00:02.641) 0:00:58.276 ******** 2025-04-10 01:03:27.431325 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:03:27.431332 | orchestrator | 2025-04-10 01:03:27.431339 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-04-10 01:03:27.431349 | orchestrator | Thursday 10 April 2025 01:01:39 +0000 (0:00:02.313) 0:01:00.589 ******** 2025-04-10 01:03:27.431356 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:27.431363 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:03:27.431370 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:03:27.431377 | orchestrator | 2025-04-10 01:03:27.431384 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-04-10 01:03:27.431391 | orchestrator | Thursday 10 April 2025 01:01:40 +0000 (0:00:00.934) 0:01:01.524 ******** 2025-04-10 01:03:27.431398 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:27.431405 | orchestrator | ok: 
[testbed-node-1] 2025-04-10 01:03:27.431412 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:03:27.431419 | orchestrator | 2025-04-10 01:03:27.431426 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-04-10 01:03:27.431433 | orchestrator | Thursday 10 April 2025 01:01:41 +0000 (0:00:00.294) 0:01:01.818 ******** 2025-04-10 01:03:27.431440 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:27.431447 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:03:27.431454 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:03:27.431461 | orchestrator | 2025-04-10 01:03:27.431469 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-04-10 01:03:27.431476 | orchestrator | Thursday 10 April 2025 01:01:41 +0000 (0:00:00.478) 0:01:02.297 ******** 2025-04-10 01:03:27.431483 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:03:27.431490 | orchestrator | 2025-04-10 01:03:27.431502 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-04-10 01:03:27.431513 | orchestrator | Thursday 10 April 2025 01:01:55 +0000 (0:00:13.610) 0:01:15.908 ******** 2025-04-10 01:03:27.431524 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:03:27.431534 | orchestrator | 2025-04-10 01:03:27.431545 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-10 01:03:27.431552 | orchestrator | Thursday 10 April 2025 01:02:04 +0000 (0:00:09.506) 0:01:25.414 ******** 2025-04-10 01:03:27.431564 | orchestrator | 2025-04-10 01:03:27.431571 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-10 01:03:27.431578 | orchestrator | Thursday 10 April 2025 01:02:04 +0000 (0:00:00.057) 0:01:25.472 ******** 2025-04-10 01:03:27.431585 | orchestrator | 2025-04-10 01:03:27.431592 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2025-04-10 01:03:27.431599 | orchestrator | Thursday 10 April 2025 01:02:04 +0000 (0:00:00.054) 0:01:25.526 ******** 2025-04-10 01:03:27.431606 | orchestrator | 2025-04-10 01:03:27.431613 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-04-10 01:03:27.431620 | orchestrator | Thursday 10 April 2025 01:02:04 +0000 (0:00:00.057) 0:01:25.584 ******** 2025-04-10 01:03:27.431627 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:03:27.431634 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:03:27.431641 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:03:27.431648 | orchestrator | 2025-04-10 01:03:27.431655 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-04-10 01:03:27.431662 | orchestrator | Thursday 10 April 2025 01:02:19 +0000 (0:00:14.346) 0:01:39.931 ******** 2025-04-10 01:03:27.431669 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:03:27.431676 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:03:27.431683 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:03:27.431690 | orchestrator | 2025-04-10 01:03:27.431697 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-04-10 01:03:27.431704 | orchestrator | Thursday 10 April 2025 01:02:29 +0000 (0:00:09.923) 0:01:49.854 ******** 2025-04-10 01:03:27.431711 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:03:27.431718 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:03:27.431725 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:03:27.431732 | orchestrator | 2025-04-10 01:03:27.431739 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-10 01:03:27.431746 | orchestrator | Thursday 10 April 2025 01:02:39 +0000 (0:00:10.400) 0:02:00.255 ******** 2025-04-10 01:03:27.431753 | orchestrator | included: 
/ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:03:27.431760 | orchestrator | 2025-04-10 01:03:27.431767 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-04-10 01:03:27.431777 | orchestrator | Thursday 10 April 2025 01:02:40 +0000 (0:00:00.897) 0:02:01.152 ******** 2025-04-10 01:03:27.431785 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:27.431792 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:03:27.431799 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:03:27.431806 | orchestrator | 2025-04-10 01:03:27.431813 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-04-10 01:03:27.431820 | orchestrator | Thursday 10 April 2025 01:02:41 +0000 (0:00:01.071) 0:02:02.224 ******** 2025-04-10 01:03:27.431827 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:03:27.431850 | orchestrator | 2025-04-10 01:03:27.431859 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-04-10 01:03:27.431866 | orchestrator | Thursday 10 April 2025 01:02:43 +0000 (0:00:01.629) 0:02:03.854 ******** 2025-04-10 01:03:27.431873 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-04-10 01:03:27.431880 | orchestrator | 2025-04-10 01:03:27.431887 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-04-10 01:03:27.431894 | orchestrator | Thursday 10 April 2025 01:02:53 +0000 (0:00:10.456) 0:02:14.311 ******** 2025-04-10 01:03:27.431902 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-04-10 01:03:27.431909 | orchestrator | 2025-04-10 01:03:27.431916 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-04-10 01:03:27.431926 | orchestrator | Thursday 10 April 2025 01:03:12 +0000 (0:00:19.367) 0:02:33.679 ******** 2025-04-10 
01:03:27.431933 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-04-10 01:03:27.431947 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-04-10 01:03:27.431954 | orchestrator | 2025-04-10 01:03:27.431961 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-04-10 01:03:27.431968 | orchestrator | Thursday 10 April 2025 01:03:20 +0000 (0:00:07.261) 0:02:40.941 ******** 2025-04-10 01:03:27.431975 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:27.431983 | orchestrator | 2025-04-10 01:03:27.431990 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-04-10 01:03:27.431997 | orchestrator | Thursday 10 April 2025 01:03:20 +0000 (0:00:00.156) 0:02:41.097 ******** 2025-04-10 01:03:27.432004 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:27.432011 | orchestrator | 2025-04-10 01:03:27.432018 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-04-10 01:03:27.432025 | orchestrator | Thursday 10 April 2025 01:03:20 +0000 (0:00:00.118) 0:02:41.216 ******** 2025-04-10 01:03:27.432032 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:27.432039 | orchestrator | 2025-04-10 01:03:27.432046 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-04-10 01:03:27.432055 | orchestrator | Thursday 10 April 2025 01:03:20 +0000 (0:00:00.157) 0:02:41.373 ******** 2025-04-10 01:03:27.432063 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:27.432070 | orchestrator | 2025-04-10 01:03:27.432077 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-04-10 01:03:27.432085 | orchestrator | Thursday 10 April 2025 01:03:21 +0000 (0:00:00.454) 0:02:41.828 ******** 2025-04-10 01:03:27.432092 
| orchestrator | ok: [testbed-node-0] 2025-04-10 01:03:27.432099 | orchestrator | 2025-04-10 01:03:27.432106 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-10 01:03:27.432113 | orchestrator | Thursday 10 April 2025 01:03:24 +0000 (0:00:03.378) 0:02:45.207 ******** 2025-04-10 01:03:27.432120 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:03:27.432127 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:03:27.432134 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:03:27.432144 | orchestrator | 2025-04-10 01:03:27.432152 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:03:27.432159 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-10 01:03:27.432167 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-10 01:03:27.432174 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-10 01:03:27.432182 | orchestrator | 2025-04-10 01:03:27.432189 | orchestrator | 2025-04-10 01:03:27.432196 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:03:27.432203 | orchestrator | Thursday 10 April 2025 01:03:24 +0000 (0:00:00.556) 0:02:45.764 ******** 2025-04-10 01:03:27.432210 | orchestrator | =============================================================================== 2025-04-10 01:03:27.432217 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.37s 2025-04-10 01:03:27.432224 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.35s 2025-04-10 01:03:27.432231 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.61s 2025-04-10 01:03:27.432238 | orchestrator | keystone : Copying 
files for keystone-fernet --------------------------- 13.13s 2025-04-10 01:03:27.432245 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.46s 2025-04-10 01:03:27.432252 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.40s 2025-04-10 01:03:27.432259 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.92s 2025-04-10 01:03:27.432266 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.51s 2025-04-10 01:03:27.432277 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.26s 2025-04-10 01:03:27.432284 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.84s 2025-04-10 01:03:27.432295 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.50s 2025-04-10 01:03:30.468705 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.47s 2025-04-10 01:03:30.468828 | orchestrator | keystone : Creating default user role ----------------------------------- 3.38s 2025-04-10 01:03:30.468906 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.37s 2025-04-10 01:03:30.468922 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.95s 2025-04-10 01:03:30.469066 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.83s 2025-04-10 01:03:30.469091 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.76s 2025-04-10 01:03:30.469106 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.64s 2025-04-10 01:03:30.469121 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.46s 2025-04-10 01:03:30.469135 | orchestrator | keystone : Creating Keystone 
database user and setting permissions ------ 2.31s
2025-04-10 01:03:30.469150 | orchestrator | 2025-04-10 01:03:27 | INFO  | Task 37eaa144-7ce8-4225-849e-0e4564228762 is in state STARTED
2025-04-10 01:03:30.469165 | orchestrator | 2025-04-10 01:03:27 | INFO  | Task 1b783e40-7a45-4336-8da0-6092fe9b10f1 is in state STARTED
2025-04-10 01:03:30.469179 | orchestrator | 2025-04-10 01:03:27 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:03:30.469212 | orchestrator | 2025-04-10 01:03:30 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:03:30.469881 | orchestrator | 2025-04-10 01:03:30 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED
2025-04-10 01:03:30.469917 | orchestrator | 2025-04-10 01:03:30 | INFO  | Task 8c9508a2-a849-41af-9b15-ba6aacc3271c is in state STARTED
2025-04-10 01:03:30.470740 | orchestrator | 2025-04-10 01:03:30 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:03:30.472895 | orchestrator | 2025-04-10 01:03:30 | INFO  | Task 37eaa144-7ce8-4225-849e-0e4564228762 is in state STARTED
2025-04-10 01:03:30.473656 | orchestrator | 2025-04-10 01:03:30 | INFO  | Task 1b783e40-7a45-4336-8da0-6092fe9b10f1 is in state STARTED
2025-04-10 01:03:30.473699 | orchestrator | 2025-04-10 01:03:30 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:04:10.210468 | orchestrator | 2025-04-10 01:04:10 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:04:10.215426 | orchestrator | 2025-04-10 01:04:10 | INFO  | Task da1b55f4-7a42-48b3-8170-9e7d5f011200 is in state STARTED
2025-04-10 01:04:10.216625 | orchestrator | 2025-04-10 01:04:10 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED
2025-04-10 01:04:10.216658 | orchestrator | 2025-04-10 01:04:10 | INFO  | Task a09f8acd-4c31-46f6-9a43-d15ca1593af9 is in state STARTED
2025-04-10 01:04:10.217389 | orchestrator | 2025-04-10 01:04:10 | INFO  | Task 8c9508a2-a849-41af-9b15-ba6aacc3271c is in state SUCCESS
2025-04-10 01:04:10.219148 | orchestrator | 2025-04-10 01:04:10 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:04:10.220003 | orchestrator | 2025-04-10 01:04:10 | INFO  | Task 37eaa144-7ce8-4225-849e-0e4564228762 is in state SUCCESS
2025-04-10 01:04:10.220802 | orchestrator | 2025-04-10 01:04:10 | INFO  | Task 1b783e40-7a45-4336-8da0-6092fe9b10f1 is in state STARTED
2025-04-10 01:04:10.220836 | orchestrator | 2025-04-10 01:04:10 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:04:37.677708 | orchestrator | 2025-04-10 01:04:37 | INFO  | Task b8f80853-ee4e-4893-a5c7-49d471e52b1e is in state STARTED
2025-04-10 01:04:40.726962 | orchestrator | 2025-04-10 01:04:40 | INFO  | Wait 1 second(s) until the next check
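The polling records above follow a simple pattern: the orchestrator checks the state of each outstanding task, logs it, and sleeps one second before the next cycle until every task reaches a terminal state. A minimal sketch of that pattern, with a hypothetical `get_state` callable standing in for however the real CLI queries its task backend:

```python
import time

# States after which a task is no longer polled (assumed set).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll every pending task, log its state, and wait between cycles
    until all tasks have reached a terminal state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

This is an illustrative reconstruction of the observed log behaviour, not the actual OSISM implementation; `wait_for_tasks`, `get_state`, and `TERMINAL_STATES` are names introduced here.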
2025-04-10 01:04:43.781982 | orchestrator | 2025-04-10 01:04:43 | INFO  | Task a09f8acd-4c31-46f6-9a43-d15ca1593af9 is in state STARTED
2025-04-10 01:04:46.830204 | orchestrator | 2025-04-10 01:04:46 | INFO  | Task b8f80853-ee4e-4893-a5c7-49d471e52b1e is in state SUCCESS
2025-04-10 01:04:46.833025 | orchestrator |
2025-04-10 01:04:46.833025 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-04-10 01:04:46.833050 | orchestrator |
2025-04-10 01:04:46.833058 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-04-10 01:04:46.833065 | orchestrator | Thursday 10 April 2025 01:03:15 +0000 (0:00:00.185) 0:00:00.185 ********
2025-04-10 01:04:46.833072 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-04-10 01:04:46.833092 | orchestrator |
2025-04-10 01:04:46.833100 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-04-10 01:04:46.833107 | orchestrator | Thursday 10 April 2025 01:03:15 +0000 (0:00:00.238) 0:00:00.423 ********
2025-04-10 01:04:46.833114 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-04-10 01:04:46.833125 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-04-10 01:04:46.833133 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-04-10 01:04:46.833140 | orchestrator |
2025-04-10 01:04:46.833147 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-04-10 01:04:46.833154 | orchestrator | Thursday 10 April 2025 01:03:17 +0000 (0:00:01.276) 0:00:01.699 ********
2025-04-10 01:04:46.833162 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-04-10 01:04:46.833169 | orchestrator |
2025-04-10 01:04:46.833176 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-04-10 01:04:46.833183 | orchestrator | Thursday 10 April 2025 01:03:18 +0000 (0:00:00.907) 0:00:02.910 ********
2025-04-10 01:04:46.833190 | orchestrator | changed: [testbed-manager]
2025-04-10 01:04:46.833198 | orchestrator |
2025-04-10 01:04:46.833205 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-04-10 01:04:46.833212 | orchestrator | Thursday 10 April 2025 01:03:19 +0000 (0:00:01.057) 0:00:03.818 ********
2025-04-10 01:04:46.833219 | orchestrator | changed: [testbed-manager]
2025-04-10 01:04:46.833226 | orchestrator |
2025-04-10 01:04:46.833233 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-04-10 01:04:46.833240 | orchestrator | Thursday 10 April 2025 01:03:20 +0000 (0:00:00.332) 0:00:04.876 ********
2025-04-10 01:04:46.833247 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-04-10 01:04:46.833254 | orchestrator | ok: [testbed-manager]
2025-04-10 01:04:46.833261 | orchestrator |
2025-04-10 01:04:46.833268 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-04-10 01:04:46.833275 | orchestrator | Thursday 10 April 2025 01:03:57 +0000 (0:00:37.329) 0:00:42.205 ********
2025-04-10 01:04:46.833282 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-04-10 01:04:46.833290 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-04-10 01:04:46.833300 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-04-10 01:04:46.833307 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-04-10 01:04:46.833314 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-04-10 01:04:46.833322 | orchestrator |
2025-04-10 01:04:46.833329 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-04-10 01:04:46.833336 | orchestrator | Thursday 10 April 2025 01:04:01 +0000 (0:00:04.062) 0:00:46.267 ********
2025-04-10 01:04:46.833343 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-04-10 01:04:46.833350 | orchestrator |
2025-04-10 01:04:46.833357 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-04-10 01:04:46.833364 | orchestrator | Thursday 10 April 2025 01:04:02 +0000 (0:00:00.451) 0:00:46.719 ********
2025-04-10 01:04:46.833371 | orchestrator | skipping: [testbed-manager]
2025-04-10 01:04:46.833378 | orchestrator |
2025-04-10 01:04:46.833385 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-04-10 01:04:46.833392 | orchestrator | Thursday 10 April 2025 01:04:02 +0000 (0:00:00.143) 0:00:46.862 ********
2025-04-10 01:04:46.833404 | orchestrator | skipping: [testbed-manager]
2025-04-10 01:04:46.833411 | orchestrator |
2025-04-10 01:04:46.833418 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-04-10 01:04:46.833425 | orchestrator | Thursday 10 April 2025 01:04:02 +0000 (0:00:00.302) 0:00:47.165 ********
2025-04-10 01:04:46.833432 | orchestrator | changed: [testbed-manager]
2025-04-10 01:04:46.833439 | orchestrator |
2025-04-10 01:04:46.833446 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-04-10 01:04:46.833454 | orchestrator | Thursday 10 April 2025 01:04:04 +0000 (0:00:01.518) 0:00:48.683 ********
2025-04-10 01:04:46.833461 | orchestrator | changed: [testbed-manager]
2025-04-10 01:04:46.833468 | orchestrator |
2025-04-10 01:04:46.833475 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-04-10 01:04:46.833485 | orchestrator | Thursday 10 April 2025 01:04:05 +0000 (0:00:01.294) 0:00:49.977 ********
2025-04-10 01:04:46.833492 | orchestrator | changed: [testbed-manager]
2025-04-10 01:04:46.833499 | orchestrator |
2025-04-10 01:04:46.833506 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-04-10 01:04:46.833513 | orchestrator | Thursday 10 April 2025 01:04:06 +0000 (0:00:00.599) 0:00:50.577 ********
2025-04-10 01:04:46.833520 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-04-10 01:04:46.833527 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-04-10 01:04:46.833534 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-04-10 01:04:46.833541 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-04-10 01:04:46.833548 | orchestrator |
2025-04-10 01:04:46.833555 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 01:04:46.833562 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-10 01:04:46.833570 | orchestrator |
2025-04-10 01:04:46.833584 | orchestrator | Thursday 10 April 2025 01:04:07 +0000 (0:00:01.533) 0:00:52.111 ********
2025-04-10 01:04:46.833592 | orchestrator | ===============================================================================
2025-04-10 01:04:46.833599 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.33s
2025-04-10 01:04:46.833606 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.06s
2025-04-10 01:04:46.833613 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.53s
2025-04-10 01:04:46.833620 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.52s
2025-04-10 01:04:46.833626 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 1.29s
2025-04-10 01:04:46.833633 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.28s
2025-04-10 01:04:46.833641 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.21s
2025-04-10 01:04:46.833649 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.06s
2025-04-10 01:04:46.833657 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.91s
2025-04-10 01:04:46.833665 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s
2025-04-10 01:04:46.833672 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s
2025-04-10 01:04:46.833680 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s
2025-04-10 01:04:46.833687 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s
2025-04-10 01:04:46.833695 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2025-04-10 01:04:46.833703 | orchestrator |
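The "Manage cephclient service" task above shows Ansible's retries/until behaviour: the first attempt fails ("FAILED - RETRYING ... (10 retries left)"), and the task reports ok once the service comes up, about 37 s later. The same retry pattern as a plain Python helper, for illustration (the function name and parameters are introduced here, not taken from the playbook):

```python
import time

def retry_until(check, retries=10, delay=1.0):
    """Call `check` until it returns True, retrying up to `retries`
    additional times with `delay` seconds between attempts.
    Returns True on success, False if all attempts fail."""
    for attempt in range(retries + 1):
        if check():
            return True
        if attempt < retries:
            time.sleep(delay)
    return False
```

In the playbook this corresponds to a task with `retries: 10` and an `until` condition; here `check` stands in for whatever condition the task evaluates.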
orchestrator | 2025-04-10 01:04:46.833718 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-04-10 01:04:46.833726 | orchestrator | 2025-04-10 01:04:46.833733 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-04-10 01:04:46.833741 | orchestrator | Thursday 10 April 2025 01:03:30 +0000 (0:00:00.331) 0:00:00.331 ******** 2025-04-10 01:04:46.833752 | orchestrator | changed: [localhost] 2025-04-10 01:04:46.833760 | orchestrator | 2025-04-10 01:04:46.833767 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-04-10 01:04:46.833775 | orchestrator | Thursday 10 April 2025 01:03:31 +0000 (0:00:00.832) 0:00:01.167 ******** 2025-04-10 01:04:46.833783 | orchestrator | changed: [localhost] 2025-04-10 01:04:46.833790 | orchestrator | 2025-04-10 01:04:46.833798 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-04-10 01:04:46.833806 | orchestrator | Thursday 10 April 2025 01:04:03 +0000 (0:00:32.207) 0:00:33.374 ******** 2025-04-10 01:04:46.833813 | orchestrator | changed: [localhost] 2025-04-10 01:04:46.833821 | orchestrator | 2025-04-10 01:04:46.833829 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 01:04:46.833836 | orchestrator | 2025-04-10 01:04:46.833844 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 01:04:46.833852 | orchestrator | Thursday 10 April 2025 01:04:07 +0000 (0:00:03.952) 0:00:37.327 ******** 2025-04-10 01:04:46.833860 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:04:46.833868 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:04:46.833891 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:04:46.833899 | orchestrator | 2025-04-10 01:04:46.833906 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-04-10 01:04:46.833916 | orchestrator | Thursday 10 April 2025 01:04:07 +0000 (0:00:00.440) 0:00:37.768 ******** 2025-04-10 01:04:46.833924 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-04-10 01:04:46.833931 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-04-10 01:04:46.833939 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-04-10 01:04:46.833947 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-04-10 01:04:46.833954 | orchestrator | 2025-04-10 01:04:46.833961 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-04-10 01:04:46.833969 | orchestrator | skipping: no hosts matched 2025-04-10 01:04:46.833976 | orchestrator | 2025-04-10 01:04:46.833984 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:04:46.833991 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:04:46.833999 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:04:46.834006 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:04:46.834040 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:04:46.834049 | orchestrator | 2025-04-10 01:04:46.834056 | orchestrator | 2025-04-10 01:04:46.834062 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:04:46.834069 | orchestrator | Thursday 10 April 2025 01:04:08 +0000 (0:00:00.473) 0:00:38.241 ******** 2025-04-10 01:04:46.834076 | orchestrator | =============================================================================== 2025-04-10 01:04:46.834082 | orchestrator | Download 
ironic-agent initramfs ---------------------------------------- 32.21s 2025-04-10 01:04:46.834089 | orchestrator | Download ironic-agent kernel -------------------------------------------- 3.95s 2025-04-10 01:04:46.834095 | orchestrator | Ensure the destination directory exists --------------------------------- 0.84s 2025-04-10 01:04:46.834102 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2025-04-10 01:04:46.834108 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2025-04-10 01:04:46.834115 | orchestrator | 2025-04-10 01:04:46.834121 | orchestrator | None 2025-04-10 01:04:46.834132 | orchestrator | 2025-04-10 01:04:46 | INFO  | Task a09f8acd-4c31-46f6-9a43-d15ca1593af9 is in state SUCCESS 2025-04-10 01:04:49.869840 | orchestrator | 2025-04-10 01:04:46 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:04:49.870012 | orchestrator | 2025-04-10 01:04:46 | INFO  | Task 1b783e40-7a45-4336-8da0-6092fe9b10f1 is in state STARTED 2025-04-10 01:04:49.870094 | orchestrator | 2025-04-10 01:04:46 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:04:49.870129 | orchestrator | 2025-04-10 01:04:49 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:04:49.870681 | orchestrator | 2025-04-10 01:04:49 | INFO  | Task da1b55f4-7a42-48b3-8170-9e7d5f011200 is in state STARTED 2025-04-10 01:04:49.871255 | orchestrator | 2025-04-10 01:04:49 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:04:49.871364 | orchestrator | 2025-04-10 01:04:49 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:04:49.871805 | orchestrator | 2025-04-10 01:04:49 | INFO  | Task 1b783e40-7a45-4336-8da0-6092fe9b10f1 is in state STARTED 2025-04-10 01:05:44.663034 | orchestrator | 2025-04-10 01:05:41 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:05:44.663172 | orchestrator | 2025-04-10 01:05:44 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:05:44.664090 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-10 01:05:44.664137 | orchestrator | 2025-04-10 01:05:44.664179 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-04-10 01:05:44.664193 | orchestrator | 2025-04-10
01:05:44.664208 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-04-10 01:05:44.664222 | orchestrator | Thursday 10 April 2025 01:04:11 +0000 (0:00:00.492) 0:00:00.492 ******** 2025-04-10 01:05:44.664237 | orchestrator | changed: [testbed-manager] 2025-04-10 01:05:44.664253 | orchestrator | 2025-04-10 01:05:44.664267 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-04-10 01:05:44.664281 | orchestrator | Thursday 10 April 2025 01:04:14 +0000 (0:00:02.310) 0:00:02.802 ******** 2025-04-10 01:05:44.664295 | orchestrator | changed: [testbed-manager] 2025-04-10 01:05:44.664309 | orchestrator | 2025-04-10 01:05:44.664323 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-04-10 01:05:44.664337 | orchestrator | Thursday 10 April 2025 01:04:15 +0000 (0:00:01.011) 0:00:03.814 ******** 2025-04-10 01:05:44.664351 | orchestrator | changed: [testbed-manager] 2025-04-10 01:05:44.664365 | orchestrator | 2025-04-10 01:05:44.664379 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-04-10 01:05:44.664393 | orchestrator | Thursday 10 April 2025 01:04:16 +0000 (0:00:01.035) 0:00:04.849 ******** 2025-04-10 01:05:44.664406 | orchestrator | changed: [testbed-manager] 2025-04-10 01:05:44.664420 | orchestrator | 2025-04-10 01:05:44.664434 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-04-10 01:05:44.664448 | orchestrator | Thursday 10 April 2025 01:04:17 +0000 (0:00:01.219) 0:00:06.069 ******** 2025-04-10 01:05:44.664462 | orchestrator | changed: [testbed-manager] 2025-04-10 01:05:44.664476 | orchestrator | 2025-04-10 01:05:44.664490 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-04-10 01:05:44.664503 | orchestrator | Thursday 10 April 2025 01:04:18 +0000 (0:00:01.057) 
0:00:07.126 ******** 2025-04-10 01:05:44.664517 | orchestrator | changed: [testbed-manager] 2025-04-10 01:05:44.664531 | orchestrator | 2025-04-10 01:05:44.664545 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-04-10 01:05:44.664559 | orchestrator | Thursday 10 April 2025 01:04:19 +0000 (0:00:01.224) 0:00:08.350 ******** 2025-04-10 01:05:44.664573 | orchestrator | changed: [testbed-manager] 2025-04-10 01:05:44.664587 | orchestrator | 2025-04-10 01:05:44.664601 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-04-10 01:05:44.664614 | orchestrator | Thursday 10 April 2025 01:04:21 +0000 (0:00:02.306) 0:00:10.657 ******** 2025-04-10 01:05:44.664628 | orchestrator | changed: [testbed-manager] 2025-04-10 01:05:44.664642 | orchestrator | 2025-04-10 01:05:44.664656 | orchestrator | TASK [Create admin user] ******************************************************* 2025-04-10 01:05:44.664670 | orchestrator | Thursday 10 April 2025 01:04:23 +0000 (0:00:01.297) 0:00:11.955 ******** 2025-04-10 01:05:44.664684 | orchestrator | changed: [testbed-manager] 2025-04-10 01:05:44.664698 | orchestrator | 2025-04-10 01:05:44.664728 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-04-10 01:05:44.664742 | orchestrator | Thursday 10 April 2025 01:04:38 +0000 (0:00:15.454) 0:00:27.409 ******** 2025-04-10 01:05:44.664756 | orchestrator | skipping: [testbed-manager] 2025-04-10 01:05:44.664770 | orchestrator | 2025-04-10 01:05:44.664784 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-10 01:05:44.664798 | orchestrator | 2025-04-10 01:05:44.664812 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-10 01:05:44.664826 | orchestrator | Thursday 10 April 2025 01:04:39 +0000 (0:00:00.647) 0:00:28.057 ******** 2025-04-10 
01:05:44.664840 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:05:44.664853 | orchestrator | 2025-04-10 01:05:44.664867 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-10 01:05:44.664881 | orchestrator | 2025-04-10 01:05:44.664895 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-10 01:05:44.664938 | orchestrator | Thursday 10 April 2025 01:04:41 +0000 (0:00:01.923) 0:00:29.981 ******** 2025-04-10 01:05:44.664962 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:05:44.664976 | orchestrator | 2025-04-10 01:05:44.664990 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-10 01:05:44.665004 | orchestrator | 2025-04-10 01:05:44.665018 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-10 01:05:44.665032 | orchestrator | Thursday 10 April 2025 01:04:42 +0000 (0:00:01.630) 0:00:31.612 ******** 2025-04-10 01:05:44.665046 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:05:44.665061 | orchestrator | 2025-04-10 01:05:44.665074 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:05:44.665089 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-10 01:05:44.665105 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:05:44.665119 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:05:44.665133 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:05:44.665147 | orchestrator | 2025-04-10 01:05:44.665162 | orchestrator | 2025-04-10 01:05:44.665176 | orchestrator | 2025-04-10 01:05:44.665190 | orchestrator | TASKS RECAP 
******************************************************************** 2025-04-10 01:05:44.665204 | orchestrator | Thursday 10 April 2025 01:04:44 +0000 (0:00:01.470) 0:00:33.082 ******** 2025-04-10 01:05:44.665218 | orchestrator | =============================================================================== 2025-04-10 01:05:44.665232 | orchestrator | Create admin user ------------------------------------------------------ 15.45s 2025-04-10 01:05:44.665257 | orchestrator | Restart ceph manager service -------------------------------------------- 5.03s 2025-04-10 01:05:44.665272 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.31s 2025-04-10 01:05:44.665286 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.31s 2025-04-10 01:05:44.665300 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.30s 2025-04-10 01:05:44.665314 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.22s 2025-04-10 01:05:44.665327 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.22s 2025-04-10 01:05:44.665342 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.06s 2025-04-10 01:05:44.665356 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.04s 2025-04-10 01:05:44.665370 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.01s 2025-04-10 01:05:44.665383 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.65s 2025-04-10 01:05:44.665397 | orchestrator | 2025-04-10 01:05:44.665411 | orchestrator | 2025-04-10 01:05:44.665425 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 01:05:44.665439 | orchestrator | 2025-04-10 01:05:44.665453 | orchestrator | TASK [Group hosts based 
on Kolla action] *************************************** 2025-04-10 01:05:44.665467 | orchestrator | Thursday 10 April 2025 01:04:16 +0000 (0:00:01.178) 0:00:01.178 ******** 2025-04-10 01:05:44.665481 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:05:44.665496 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:05:44.665510 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:05:44.665524 | orchestrator | 2025-04-10 01:05:44.665538 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 01:05:44.665551 | orchestrator | Thursday 10 April 2025 01:04:17 +0000 (0:00:00.644) 0:00:01.822 ******** 2025-04-10 01:05:44.665565 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-04-10 01:05:44.665579 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-04-10 01:05:44.665604 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-04-10 01:05:44.665618 | orchestrator | 2025-04-10 01:05:44.665632 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-04-10 01:05:44.665645 | orchestrator | 2025-04-10 01:05:44.665665 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-04-10 01:05:44.665679 | orchestrator | Thursday 10 April 2025 01:04:18 +0000 (0:00:00.720) 0:00:02.543 ******** 2025-04-10 01:05:44.665693 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:05:44.665709 | orchestrator | 2025-04-10 01:05:44.665722 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-04-10 01:05:44.665736 | orchestrator | Thursday 10 April 2025 01:04:19 +0000 (0:00:01.411) 0:00:03.954 ******** 2025-04-10 01:05:44.665749 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-04-10 01:05:44.665763 | orchestrator | 2025-04-10 
01:05:44.665777 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-04-10 01:05:44.665791 | orchestrator | Thursday 10 April 2025 01:04:23 +0000 (0:00:04.099) 0:00:08.053 ******** 2025-04-10 01:05:44.665805 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-04-10 01:05:44.665819 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-04-10 01:05:44.665833 | orchestrator | 2025-04-10 01:05:44.665847 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-04-10 01:05:44.665861 | orchestrator | Thursday 10 April 2025 01:04:30 +0000 (0:00:07.133) 0:00:15.187 ******** 2025-04-10 01:05:44.665875 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-10 01:05:44.665889 | orchestrator | 2025-04-10 01:05:44.665927 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-04-10 01:05:44.665943 | orchestrator | Thursday 10 April 2025 01:04:34 +0000 (0:00:03.962) 0:00:19.149 ******** 2025-04-10 01:05:44.665957 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-10 01:05:44.665971 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-04-10 01:05:44.665985 | orchestrator | 2025-04-10 01:05:44.665999 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-04-10 01:05:44.666013 | orchestrator | Thursday 10 April 2025 01:04:38 +0000 (0:00:04.250) 0:00:23.399 ******** 2025-04-10 01:05:44.666082 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-10 01:05:44.666096 | orchestrator | 2025-04-10 01:05:44.666110 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-04-10 01:05:44.666125 | orchestrator | Thursday 10 April 2025 01:04:42 +0000 
(0:00:03.820) 0:00:27.219 ******** 2025-04-10 01:05:44.666139 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-04-10 01:05:44.666152 | orchestrator | 2025-04-10 01:05:44.666166 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-04-10 01:05:44.666180 | orchestrator | Thursday 10 April 2025 01:04:47 +0000 (0:00:04.581) 0:00:31.801 ******** 2025-04-10 01:05:44.666194 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:05:44.666208 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:05:44.666222 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:05:44.666238 | orchestrator | 2025-04-10 01:05:44.666260 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-04-10 01:05:44.666284 | orchestrator | Thursday 10 April 2025 01:04:47 +0000 (0:00:00.431) 0:00:32.233 ******** 2025-04-10 01:05:44.666324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.666518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.666537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.666552 | orchestrator | 2025-04-10 01:05:44.666567 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-04-10 01:05:44.666581 | orchestrator | 
Thursday 10 April 2025 01:04:49 +0000 (0:00:02.264) 0:00:34.497 ******** 2025-04-10 01:05:44.666595 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:05:44.666609 | orchestrator | 2025-04-10 01:05:44.666623 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-04-10 01:05:44.666638 | orchestrator | Thursday 10 April 2025 01:04:50 +0000 (0:00:00.470) 0:00:34.968 ******** 2025-04-10 01:05:44.666651 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:05:44.666666 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:05:44.666680 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:05:44.666694 | orchestrator | 2025-04-10 01:05:44.666708 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-04-10 01:05:44.666722 | orchestrator | Thursday 10 April 2025 01:04:51 +0000 (0:00:01.182) 0:00:36.150 ******** 2025-04-10 01:05:44.666736 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:05:44.666751 | orchestrator | 2025-04-10 01:05:44.666764 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-04-10 01:05:44.666778 | orchestrator | Thursday 10 April 2025 01:04:53 +0000 (0:00:01.375) 0:00:37.525 ******** 2025-04-10 01:05:44.666804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.666827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.666843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.666861 | orchestrator | 2025-04-10 01:05:44.666886 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-04-10 01:05:44.666991 | orchestrator | Thursday 10 April 2025 01:04:56 +0000 (0:00:02.991) 0:00:40.517 ******** 2025-04-10 01:05:44.667195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-10 01:05:44.667215 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:05:44.667232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-10 01:05:44.667271 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:05:44.667288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-10 01:05:44.667303 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:05:44.667317 | orchestrator | 2025-04-10 01:05:44.667332 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-04-10 01:05:44.667346 | orchestrator | Thursday 10 April 2025 01:04:58 +0000 (0:00:02.329) 0:00:42.846 ******** 2025-04-10 01:05:44.667360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-10 01:05:44.667375 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:05:44.667389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-10 01:05:44.667404 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:05:44.667418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-10 01:05:44.667439 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:05:44.667453 | orchestrator | 2025-04-10 01:05:44.667473 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-04-10 01:05:44.667488 | orchestrator | Thursday 10 April 2025 01:05:00 +0000 (0:00:02.658) 0:00:45.504 ******** 2025-04-10 01:05:44.667500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.667514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.667527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.667540 | orchestrator | 2025-04-10 01:05:44.667561 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-04-10 01:05:44.667579 | orchestrator | Thursday 10 April 2025 01:05:03 +0000 (0:00:02.352) 0:00:47.857 ******** 2025-04-10 01:05:44.667592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.667613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.667627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.667640 | orchestrator | 2025-04-10 01:05:44.667653 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-04-10 01:05:44.667665 | orchestrator | Thursday 10 April 2025 01:05:07 +0000 (0:00:04.217) 0:00:52.075 ******** 2025-04-10 01:05:44.667678 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-10 01:05:44.667691 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-10 01:05:44.667703 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-10 01:05:44.667715 | orchestrator | 2025-04-10 
01:05:44.667728 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-04-10 01:05:44.667741 | orchestrator | Thursday 10 April 2025 01:05:10 +0000 (0:00:02.708) 0:00:54.783 ******** 2025-04-10 01:05:44.667753 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:05:44.667765 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:05:44.667777 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:05:44.667797 | orchestrator | 2025-04-10 01:05:44.667810 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-04-10 01:05:44.667822 | orchestrator | Thursday 10 April 2025 01:05:12 +0000 (0:00:02.293) 0:00:57.076 ******** 2025-04-10 01:05:44.667835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-10 01:05:44.667848 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:05:44.667869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-10 01:05:44.667883 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:05:44.667895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-10 01:05:44.667940 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:05:44.667959 | orchestrator | 2025-04-10 01:05:44.667971 | orchestrator | TASK [placement : Check placement containers] 
********************************** 2025-04-10 01:05:44.667984 | orchestrator | Thursday 10 April 2025 01:05:13 +0000 (0:00:00.760) 0:00:57.836 ******** 2025-04-10 01:05:44.667996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.668061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.668085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-10 01:05:44.668099 | orchestrator | 2025-04-10 01:05:44.668111 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-04-10 01:05:44.668124 | orchestrator | Thursday 10 April 2025 01:05:14 +0000 (0:00:01.385) 0:00:59.222 ******** 2025-04-10 01:05:44.668136 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:05:44.668149 | orchestrator | 2025-04-10 01:05:44.668161 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-04-10 01:05:44.668174 | orchestrator | Thursday 10 April 2025 01:05:18 +0000 (0:00:03.557) 0:01:02.779 ******** 2025-04-10 01:05:44.668186 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:05:44.668198 | orchestrator | 2025-04-10 01:05:44.668211 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-04-10 01:05:44.668223 | orchestrator | Thursday 10 April 2025 01:05:20 +0000 (0:00:02.736) 0:01:05.515 
******** 2025-04-10 01:05:44.668235 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:05:44.668247 | orchestrator | 2025-04-10 01:05:44.668260 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-04-10 01:05:44.668272 | orchestrator | Thursday 10 April 2025 01:05:33 +0000 (0:00:12.715) 0:01:18.231 ******** 2025-04-10 01:05:44.668284 | orchestrator | 2025-04-10 01:05:44.668297 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-04-10 01:05:44.668309 | orchestrator | Thursday 10 April 2025 01:05:33 +0000 (0:00:00.065) 0:01:18.297 ******** 2025-04-10 01:05:44.668321 | orchestrator | 2025-04-10 01:05:44.668333 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-04-10 01:05:44.668346 | orchestrator | Thursday 10 April 2025 01:05:33 +0000 (0:00:00.210) 0:01:18.507 ******** 2025-04-10 01:05:44.668358 | orchestrator | 2025-04-10 01:05:44.668370 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-04-10 01:05:44.668390 | orchestrator | Thursday 10 April 2025 01:05:34 +0000 (0:00:00.067) 0:01:18.574 ******** 2025-04-10 01:05:44.668403 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:05:44.668415 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:05:44.668428 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:05:44.668440 | orchestrator | 2025-04-10 01:05:44.668452 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:05:44.668465 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-10 01:05:44.668478 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-10 01:05:44.668490 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 
ignored=0 2025-04-10 01:05:44.668503 | orchestrator | 2025-04-10 01:05:44.668515 | orchestrator | 2025-04-10 01:05:44.668527 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:05:44.668540 | orchestrator | Thursday 10 April 2025 01:05:44 +0000 (0:00:10.020) 0:01:28.595 ******** 2025-04-10 01:05:44.668552 | orchestrator | =============================================================================== 2025-04-10 01:05:44.668564 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.71s 2025-04-10 01:05:44.668577 | orchestrator | placement : Restart placement-api container ---------------------------- 10.02s 2025-04-10 01:05:44.668589 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.13s 2025-04-10 01:05:44.668606 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.58s 2025-04-10 01:05:44.668618 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.25s 2025-04-10 01:05:44.668631 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.22s 2025-04-10 01:05:44.668643 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.10s 2025-04-10 01:05:44.668655 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.96s 2025-04-10 01:05:44.668667 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.82s 2025-04-10 01:05:44.668680 | orchestrator | placement : Creating placement databases -------------------------------- 3.56s 2025-04-10 01:05:44.668692 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.99s 2025-04-10 01:05:44.668704 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.74s 2025-04-10 01:05:44.668716 | orchestrator | 
placement : Copying over placement-api wsgi configuration --------------- 2.70s 2025-04-10 01:05:44.668728 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 2.66s 2025-04-10 01:05:44.668741 | orchestrator | placement : Copying over config.json files for services ----------------- 2.35s 2025-04-10 01:05:44.668753 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 2.33s 2025-04-10 01:05:44.668765 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.29s 2025-04-10 01:05:44.668777 | orchestrator | placement : Ensuring config directories exist --------------------------- 2.26s 2025-04-10 01:05:44.668790 | orchestrator | placement : include_tasks ----------------------------------------------- 1.41s 2025-04-10 01:05:44.668802 | orchestrator | placement : Check placement containers ---------------------------------- 1.39s 2025-04-10 01:05:44.668814 | orchestrator | 2025-04-10 01:05:44 | INFO  | Task da1b55f4-7a42-48b3-8170-9e7d5f011200 is in state SUCCESS 2025-04-10 01:05:44.668832 | orchestrator | 2025-04-10 01:05:44 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:05:47.717249 | orchestrator | 2025-04-10 01:05:44 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:05:47.717430 | orchestrator | 2025-04-10 01:05:44 | INFO  | Task 1b783e40-7a45-4336-8da0-6092fe9b10f1 is in state STARTED 2025-04-10 01:05:47.717454 | orchestrator | 2025-04-10 01:05:44 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:05:47.717489 | orchestrator | 2025-04-10 01:05:47 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:05:47.718009 | orchestrator | 2025-04-10 01:05:47 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:05:47.718103 | orchestrator | 2025-04-10 01:05:47 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc 
is in state STARTED
2025-04-10 01:05:47.718457 | orchestrator | 2025-04-10 01:05:47 | INFO  | Task 1b783e40-7a45-4336-8da0-6092fe9b10f1 is in state STARTED
2025-04-10 01:05:47.719233 | orchestrator | 2025-04-10 01:05:47 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED
2025-04-10 01:05:50.746817 | orchestrator | 2025-04-10 01:05:47 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:05:50.747017 | orchestrator | 2025-04-10 01:05:50 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:05:50.747287 | orchestrator | 2025-04-10 01:05:50 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED
2025-04-10 01:05:50.747324 | orchestrator | 2025-04-10 01:05:50 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:05:50.747768 | orchestrator | 2025-04-10 01:05:50 | INFO  | Task 1b783e40-7a45-4336-8da0-6092fe9b10f1 is in state STARTED
2025-04-10 01:05:50.748584 | orchestrator | 2025-04-10 01:05:50 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED
2025-04-10 01:05:53.775892 | orchestrator | 2025-04-10 01:05:50 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:05:53.776060 | orchestrator | 2025-04-10 01:05:53 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:05:53.777404 | orchestrator | 2025-04-10 01:05:53 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED
2025-04-10 01:05:53.777720 | orchestrator | 2025-04-10 01:05:53 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:05:53.778352 | orchestrator | 2025-04-10 01:05:53 | INFO  | Task 1b783e40-7a45-4336-8da0-6092fe9b10f1 is in state STARTED
2025-04-10 01:05:53.779110 | orchestrator | 2025-04-10 01:05:53 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED
2025-04-10 01:05:56.829971 | orchestrator | 2025-04-10 01:05:53 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:05:56.830164 | orchestrator | 2025-04-10 01:05:56 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:05:56.830491 | orchestrator | 2025-04-10 01:05:56 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED
2025-04-10 01:05:56.831595 | orchestrator | 2025-04-10 01:05:56 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:05:56.832510 | orchestrator | 2025-04-10 01:05:56 | INFO  | Task 1b783e40-7a45-4336-8da0-6092fe9b10f1 is in state STARTED
2025-04-10 01:05:56.833351 | orchestrator | 2025-04-10 01:05:56 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED
2025-04-10 01:05:59.862597 | orchestrator | 2025-04-10 01:05:56 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:05:59.862734 | orchestrator | 2025-04-10 01:05:59 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:05:59.862965 | orchestrator | 2025-04-10 01:05:59 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED
2025-04-10 01:05:59.863904 | orchestrator | 2025-04-10 01:05:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:05:59.864956 | orchestrator | 2025-04-10 01:05:59 | INFO  | Task 1b783e40-7a45-4336-8da0-6092fe9b10f1 is in state STARTED
2025-04-10 01:05:59.865715 | orchestrator | 2025-04-10 01:05:59 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED
2025-04-10 01:06:02.896777 | orchestrator | 2025-04-10 01:05:59 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:06:02.896902 | orchestrator | 2025-04-10 01:06:02 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:06:02.897401 | orchestrator | 2025-04-10 01:06:02 | INFO  | Task c936e552-db5c-473f-8ac4-55c7e183b905 is in state STARTED
2025-04-10 01:06:02.900993 | orchestrator | 2025-04-10 01:06:02 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED
2025-04-10 01:06:02.901565 | orchestrator | 2025-04-10 01:06:02 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:06:02.903192 | orchestrator | 2025-04-10 01:06:02 | INFO  | Task 1b783e40-7a45-4336-8da0-6092fe9b10f1 is in state SUCCESS
2025-04-10 01:06:02.904548 | orchestrator | 
2025-04-10 01:06:02.904591 | orchestrator | 
2025-04-10 01:06:02.904606 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-10 01:06:02.904622 | orchestrator | 
2025-04-10 01:06:02.904636 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-10 01:06:02.904651 | orchestrator | Thursday 10 April 2025 01:03:30 +0000 (0:00:00.627) 0:00:00.627 ********
2025-04-10 01:06:02.904665 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:06:02.904681 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:06:02.904695 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:06:02.904709 | orchestrator | 
2025-04-10 01:06:02.904723 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-10 01:06:02.904824 | orchestrator | Thursday 10 April 2025 01:03:31 +0000 (0:00:00.806) 0:00:01.434 ********
2025-04-10 01:06:02.904842 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-04-10 01:06:02.904857 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-04-10 01:06:02.904872 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-04-10 01:06:02.904887 | orchestrator | 
2025-04-10 01:06:02.904902 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-04-10 01:06:02.905248 | orchestrator | 
2025-04-10 01:06:02.905267 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-04-10 01:06:02.905281 | orchestrator | Thursday 10 April 2025 01:03:32 +0000 (0:00:00.621) 0:00:02.056 ********
2025-04-10 01:06:02.905296 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 01:06:02.905312 | orchestrator | 
2025-04-10 01:06:02.905326 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-04-10 01:06:02.905340 | orchestrator | Thursday 10 April 2025 01:03:33 +0000 (0:00:01.055) 0:00:03.112 ********
2025-04-10 01:06:02.905355 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-04-10 01:06:02.905387 | orchestrator | 
2025-04-10 01:06:02.905402 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-04-10 01:06:02.905416 | orchestrator | Thursday 10 April 2025 01:03:37 +0000 (0:00:03.782) 0:00:06.894 ********
2025-04-10 01:06:02.905430 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-04-10 01:06:02.905444 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-04-10 01:06:02.905458 | orchestrator | 
2025-04-10 01:06:02.905472 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-04-10 01:06:02.905486 | orchestrator | Thursday 10 April 2025 01:03:43 +0000 (0:00:06.703) 0:00:13.598 ********
2025-04-10 01:06:02.905500 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-04-10 01:06:02.905537 | orchestrator | 
2025-04-10 01:06:02.905551 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-04-10 01:06:02.905571 | orchestrator | Thursday 10 April 2025 01:03:47 +0000 (0:00:03.676) 0:00:17.274 ********
2025-04-10 01:06:02.905587 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-04-10 01:06:02.905602 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-04-10 01:06:02.905617 | orchestrator | 
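[editor's note] The long run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages earlier in this log is the OSISM CLI polling the state of its background (Celery) tasks until each one leaves STARTED. A minimal sketch of such a wait loop, assuming a hypothetical `get_state` callable in place of the real result backend; this is illustrative only, not the actual OSISM implementation:

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, sleep=time.sleep):
    """Poll task states until every task leaves STARTED.

    get_state: callable mapping a task ID to its current state string
    (hypothetical stand-in for the Celery result backend the real CLI
    queries). Returns a dict of task ID -> final state.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                results[task_id] = state
        # Stop tracking tasks that reached a final state this round.
        pending -= results.keys()
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return results
```

Injecting `sleep` as a parameter keeps the loop testable without real delays; the real CLI additionally distinguishes SUCCESS from FAILURE states rather than treating any non-STARTED state as done.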
2025-04-10 01:06:02.905632 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-04-10 01:06:02.905646 | orchestrator | Thursday 10 April 2025 01:03:51 +0000 (0:00:03.929) 0:00:21.203 ******** 2025-04-10 01:06:02.905661 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-10 01:06:02.905677 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-04-10 01:06:02.905691 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-04-10 01:06:02.905706 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-04-10 01:06:02.905721 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-04-10 01:06:02.905736 | orchestrator | 2025-04-10 01:06:02.905751 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-04-10 01:06:02.905766 | orchestrator | Thursday 10 April 2025 01:04:07 +0000 (0:00:16.065) 0:00:37.269 ******** 2025-04-10 01:06:02.905780 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-04-10 01:06:02.905795 | orchestrator | 2025-04-10 01:06:02.905809 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-04-10 01:06:02.905824 | orchestrator | Thursday 10 April 2025 01:04:12 +0000 (0:00:05.284) 0:00:42.554 ******** 2025-04-10 01:06:02.905841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-10 01:06:02.905874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-10 01:06:02.905894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:06:02.905941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-10 01:06:02.905958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-10 01:06:02.905975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-10 01:06:02.906000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-10 01:06:02.906070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:06:02.906097 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.906112 | orchestrator | 
2025-04-10 01:06:02.906126 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-04-10 01:06:02.906141 | orchestrator | Thursday 10 April 2025 01:04:17 +0000 (0:00:04.490) 0:00:47.044 ********
2025-04-10 01:06:02.906155 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-04-10 01:06:02.906169 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-04-10 01:06:02.906183 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-04-10 01:06:02.906196 | orchestrator | 
2025-04-10 01:06:02.906210 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-04-10 01:06:02.906224 | orchestrator | Thursday 10 April 2025 01:04:19 +0000 (0:00:02.607) 0:00:49.651 ********
2025-04-10 01:06:02.906238 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:06:02.906253 | orchestrator | 
2025-04-10 01:06:02.906267 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-04-10 01:06:02.906281 | orchestrator | Thursday 10 April 2025 01:04:20 +0000 (0:00:00.314) 0:00:49.966 ********
2025-04-10 01:06:02.906295 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:06:02.906309 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:06:02.906323 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:06:02.906337 | orchestrator | 2025-04-10 01:06:02.906351 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-04-10 01:06:02.906365 | orchestrator | Thursday 10 April 2025 01:04:21 +0000 (0:00:01.326) 0:00:51.292 ******** 2025-04-10 01:06:02.906379 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:06:02.906393 | orchestrator | 2025-04-10 01:06:02.906407 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-04-10 01:06:02.906421 | orchestrator | Thursday 10 April 2025 01:04:23 +0000 (0:00:02.118) 0:00:53.411 ******** 2025-04-10 01:06:02.906436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-10 01:06:02.906461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-10 01:06:02.906564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-10 01:06:02.906585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-10 01:06:02.906600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-10 01:06:02.906615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-10 01:06:02.906638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:06:02.906661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:06:02.906676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:06:02.906691 | orchestrator | 2025-04-10 01:06:02.906706 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-04-10 01:06:02.906720 | orchestrator | Thursday 10 April 2025 01:04:28 +0000 (0:00:04.594) 0:00:58.005 ******** 2025-04-10 01:06:02.906735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-10 01:06:02.906751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-10 01:06:02.906775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-10 01:06:02.906797 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:06:02.906813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-10 01:06:02.906828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-10 01:06:02.906843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-10 01:06:02.906857 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:06:02.906872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-10 01:06:02.906904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-10 01:06:02.906962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-10 01:06:02.906979 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:06:02.906993 | orchestrator | 2025-04-10 01:06:02.907007 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend 
internal TLS key] ****
2025-04-10 01:06:02.907022 | orchestrator | Thursday 10 April 2025 01:04:29 +0000 (0:00:00.965) 0:00:58.971 ********
2025-04-10 01:06:02.907037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.907052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907081 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:06:02.907103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.907130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907159 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:06:02.907174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.907189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907229 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:06:02.907247 | orchestrator |
2025-04-10 01:06:02.907264 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-04-10 01:06:02.907285 | orchestrator | Thursday 10 April 2025 01:04:30 +0000 (0:00:01.368) 0:01:00.339 ********
2025-04-10 01:06:02.907313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.907331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.907348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.907365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907477 | orchestrator |
2025-04-10 01:06:02.907491 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-04-10 01:06:02.907512 | orchestrator | Thursday 10 April 2025 01:04:34 +0000 (0:00:04.198) 0:01:04.537 ********
2025-04-10 01:06:02.907526 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:06:02.907541 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:06:02.907555 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:06:02.907569 | orchestrator |
2025-04-10 01:06:02.907583 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-04-10 01:06:02.907598 | orchestrator | Thursday 10 April 2025 01:04:38 +0000 (0:00:03.369) 0:01:07.906 ********
2025-04-10 01:06:02.907612 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-10 01:06:02.907626 | orchestrator |
2025-04-10 01:06:02.907640 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-04-10 01:06:02.907654 | orchestrator | Thursday 10 April 2025 01:04:40 +0000 (0:00:02.409) 0:01:10.316 ********
2025-04-10 01:06:02.907668 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:06:02.907683 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:06:02.907697 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:06:02.907711 | orchestrator |
2025-04-10 01:06:02.907725 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-04-10 01:06:02.907739 | orchestrator | Thursday 10 April 2025 01:04:43 +0000 (0:00:02.388) 0:01:12.705 ********
2025-04-10 01:06:02.907766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.907786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.907802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.907823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.907986 | orchestrator |
2025-04-10 01:06:02.908001 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2025-04-10 01:06:02.908020 | orchestrator | Thursday 10 April 2025 01:04:56 +0000 (0:00:13.944) 0:01:26.649 ********
2025-04-10 01:06:02.908035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.908062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.908078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.908093 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:06:02.908106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.908125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.908139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.908151 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:06:02.908174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.908188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.908201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.908214 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:06:02.908227 | orchestrator |
2025-04-10 01:06:02.908240 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2025-04-10 01:06:02.908258 | orchestrator | Thursday 10 April 2025 01:04:59 +0000 (0:00:02.618) 0:01:29.267 ********
2025-04-10 01:06:02.908271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.908289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.908309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-10 01:06:02.908322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.908335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.908354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.908374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.908388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.908408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:06:02.908422 | orchestrator |
2025-04-10 01:06:02.908434 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-04-10 01:06:02.908447 | orchestrator | Thursday 10 April 2025 01:05:04 +0000 (0:00:04.428) 0:01:33.695 ********
2025-04-10 01:06:02.908460 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:06:02.908473 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:06:02.908485 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:06:02.908498 | orchestrator |
2025-04-10 01:06:02.908510 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-04-10 01:06:02.908522 | orchestrator | Thursday 10 April 2025 01:05:04 +0000 (0:00:00.636) 0:01:34.332 ********
2025-04-10 01:06:02.908535 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:06:02.908547 | orchestrator |
2025-04-10 01:06:02.908560 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-04-10 01:06:02.908578 | orchestrator | Thursday 10 April 2025 01:05:07 +0000 (0:00:03.273) 0:01:37.605 ********
2025-04-10 01:06:02.908591 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:06:02.908603 | orchestrator |
2025-04-10 01:06:02.908615 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-04-10 01:06:02.908628 | orchestrator | Thursday 10 April 2025 01:05:10 +0000 (0:00:02.718) 0:01:40.323 ********
2025-04-10 01:06:02.908640 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:06:02.908652 | orchestrator |
2025-04-10 01:06:02.908665 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-04-10 01:06:02.908677 | orchestrator | Thursday 10 April 2025 01:05:21 +0000 (0:00:11.309) 0:01:51.633 ********
2025-04-10 01:06:02.908690 | orchestrator |
2025-04-10 01:06:02.908702 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-04-10 01:06:02.908715 | orchestrator | Thursday 10 April 2025 01:05:22 +0000 (0:00:00.116) 0:01:51.750 ********
2025-04-10 01:06:02.908727 | orchestrator |
2025-04-10 01:06:02.908739 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-04-10 01:06:02.908752 | orchestrator | Thursday 10 April 2025 01:05:22 +0000 (0:00:00.331) 0:01:52.082 ********
2025-04-10 01:06:02.908764 | orchestrator |
2025-04-10 01:06:02.908777 | orchestrator | RUNNING HANDLER [barbican : Restart
barbican-api container] ******************** 2025-04-10 01:06:02.908789 | orchestrator | Thursday 10 April 2025 01:05:22 +0000 (0:00:00.057) 0:01:52.139 ******** 2025-04-10 01:06:02.908802 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:06:02.908814 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:06:02.908827 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:06:02.908839 | orchestrator | 2025-04-10 01:06:02.908852 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-04-10 01:06:02.908864 | orchestrator | Thursday 10 April 2025 01:05:36 +0000 (0:00:13.654) 0:02:05.793 ******** 2025-04-10 01:06:02.908877 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:06:02.908889 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:06:02.908901 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:06:02.908931 | orchestrator | 2025-04-10 01:06:02.908944 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-04-10 01:06:02.908956 | orchestrator | Thursday 10 April 2025 01:05:44 +0000 (0:00:08.554) 0:02:14.348 ******** 2025-04-10 01:06:02.908969 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:06:02.908981 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:06:02.908993 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:06:02.909006 | orchestrator | 2025-04-10 01:06:02.909018 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:06:02.909031 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-10 01:06:02.909044 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-10 01:06:02.909057 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-10 01:06:02.909069 | orchestrator | 2025-04-10 
01:06:02.909081 | orchestrator | 2025-04-10 01:06:02.909093 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:06:02.909106 | orchestrator | Thursday 10 April 2025 01:05:59 +0000 (0:00:14.457) 0:02:28.806 ******** 2025-04-10 01:06:02.909118 | orchestrator | =============================================================================== 2025-04-10 01:06:02.909131 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.07s 2025-04-10 01:06:02.909143 | orchestrator | barbican : Restart barbican-worker container --------------------------- 14.46s 2025-04-10 01:06:02.909155 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 13.94s 2025-04-10 01:06:02.909167 | orchestrator | barbican : Restart barbican-api container ------------------------------ 13.65s 2025-04-10 01:06:02.909185 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.31s 2025-04-10 01:06:02.909203 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.55s 2025-04-10 01:06:02.909216 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.70s 2025-04-10 01:06:02.909233 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.28s 2025-04-10 01:06:05.935688 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.59s 2025-04-10 01:06:05.935772 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 4.49s 2025-04-10 01:06:05.935780 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.43s 2025-04-10 01:06:05.935785 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.20s 2025-04-10 01:06:05.935791 | orchestrator | service-ks-register : barbican | Creating users 
------------------------- 3.93s 2025-04-10 01:06:05.935796 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.78s 2025-04-10 01:06:05.935801 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.68s 2025-04-10 01:06:05.935806 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.37s 2025-04-10 01:06:05.935811 | orchestrator | barbican : Creating barbican database ----------------------------------- 3.27s 2025-04-10 01:06:05.935816 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.72s 2025-04-10 01:06:05.935821 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.62s 2025-04-10 01:06:05.935826 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.61s 2025-04-10 01:06:05.935831 | orchestrator | 2025-04-10 01:06:02 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:05.935837 | orchestrator | 2025-04-10 01:06:02 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:05.935852 | orchestrator | 2025-04-10 01:06:05 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:05.937748 | orchestrator | 2025-04-10 01:06:05 | INFO  | Task c936e552-db5c-473f-8ac4-55c7e183b905 is in state STARTED 2025-04-10 01:06:05.939252 | orchestrator | 2025-04-10 01:06:05 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:05.941099 | orchestrator | 2025-04-10 01:06:05 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:05.942580 | orchestrator | 2025-04-10 01:06:05 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:05.943058 | orchestrator | 2025-04-10 01:06:05 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:08.973849 | orchestrator | 2025-04-10 
01:06:08 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:08.974383 | orchestrator | 2025-04-10 01:06:08 | INFO  | Task c936e552-db5c-473f-8ac4-55c7e183b905 is in state SUCCESS 2025-04-10 01:06:08.977600 | orchestrator | 2025-04-10 01:06:08 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:08.978431 | orchestrator | 2025-04-10 01:06:08 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:08.979408 | orchestrator | 2025-04-10 01:06:08 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:12.009734 | orchestrator | 2025-04-10 01:06:08 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:12.009881 | orchestrator | 2025-04-10 01:06:12 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:12.010206 | orchestrator | 2025-04-10 01:06:12 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:12.011254 | orchestrator | 2025-04-10 01:06:12 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:12.013585 | orchestrator | 2025-04-10 01:06:12 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:12.014706 | orchestrator | 2025-04-10 01:06:12 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:12.016714 | orchestrator | 2025-04-10 01:06:12 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:15.053005 | orchestrator | 2025-04-10 01:06:15 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:15.053651 | orchestrator | 2025-04-10 01:06:15 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:15.054672 | orchestrator | 2025-04-10 01:06:15 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:15.055650 | orchestrator | 2025-04-10 
01:06:15 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:15.056617 | orchestrator | 2025-04-10 01:06:15 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:15.056804 | orchestrator | 2025-04-10 01:06:15 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:18.097246 | orchestrator | 2025-04-10 01:06:18 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:18.099209 | orchestrator | 2025-04-10 01:06:18 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:18.099304 | orchestrator | 2025-04-10 01:06:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:18.100790 | orchestrator | 2025-04-10 01:06:18 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:18.101800 | orchestrator | 2025-04-10 01:06:18 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:18.102418 | orchestrator | 2025-04-10 01:06:18 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:21.138513 | orchestrator | 2025-04-10 01:06:21 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:21.139102 | orchestrator | 2025-04-10 01:06:21 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:21.141447 | orchestrator | 2025-04-10 01:06:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:21.141797 | orchestrator | 2025-04-10 01:06:21 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:21.141825 | orchestrator | 2025-04-10 01:06:21 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:24.193583 | orchestrator | 2025-04-10 01:06:21 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:24.193682 | orchestrator | 2025-04-10 01:06:24 | INFO  | Task 
e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:24.194705 | orchestrator | 2025-04-10 01:06:24 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:24.199420 | orchestrator | 2025-04-10 01:06:24 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:24.200914 | orchestrator | 2025-04-10 01:06:24 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:24.202477 | orchestrator | 2025-04-10 01:06:24 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:24.202896 | orchestrator | 2025-04-10 01:06:24 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:27.237246 | orchestrator | 2025-04-10 01:06:27 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:27.237993 | orchestrator | 2025-04-10 01:06:27 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:27.239180 | orchestrator | 2025-04-10 01:06:27 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:27.240047 | orchestrator | 2025-04-10 01:06:27 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:27.240967 | orchestrator | 2025-04-10 01:06:27 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:27.241178 | orchestrator | 2025-04-10 01:06:27 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:30.293219 | orchestrator | 2025-04-10 01:06:30 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:30.293898 | orchestrator | 2025-04-10 01:06:30 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:30.294275 | orchestrator | 2025-04-10 01:06:30 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:30.298095 | orchestrator | 2025-04-10 01:06:30 | INFO  | Task 
1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:30.298414 | orchestrator | 2025-04-10 01:06:30 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:33.329105 | orchestrator | 2025-04-10 01:06:30 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:33.329382 | orchestrator | 2025-04-10 01:06:33 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:36.366674 | orchestrator | 2025-04-10 01:06:33 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:36.366799 | orchestrator | 2025-04-10 01:06:33 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:36.366819 | orchestrator | 2025-04-10 01:06:33 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:36.366835 | orchestrator | 2025-04-10 01:06:33 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:36.366850 | orchestrator | 2025-04-10 01:06:33 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:36.366881 | orchestrator | 2025-04-10 01:06:36 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:36.367117 | orchestrator | 2025-04-10 01:06:36 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:36.367148 | orchestrator | 2025-04-10 01:06:36 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:36.367730 | orchestrator | 2025-04-10 01:06:36 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:36.368364 | orchestrator | 2025-04-10 01:06:36 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:39.395398 | orchestrator | 2025-04-10 01:06:36 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:39.395541 | orchestrator | 2025-04-10 01:06:39 | INFO  | Task 
e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:39.396161 | orchestrator | 2025-04-10 01:06:39 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:39.396174 | orchestrator | 2025-04-10 01:06:39 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:39.396725 | orchestrator | 2025-04-10 01:06:39 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:39.397483 | orchestrator | 2025-04-10 01:06:39 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:39.397542 | orchestrator | 2025-04-10 01:06:39 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:42.435030 | orchestrator | 2025-04-10 01:06:42 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:42.435266 | orchestrator | 2025-04-10 01:06:42 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:42.435758 | orchestrator | 2025-04-10 01:06:42 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:42.436922 | orchestrator | 2025-04-10 01:06:42 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:42.437471 | orchestrator | 2025-04-10 01:06:42 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:45.466237 | orchestrator | 2025-04-10 01:06:42 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:45.466377 | orchestrator | 2025-04-10 01:06:45 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:45.467376 | orchestrator | 2025-04-10 01:06:45 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:45.475671 | orchestrator | 2025-04-10 01:06:45 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:45.476115 | orchestrator | 2025-04-10 01:06:45 | INFO  | Task 
1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:45.476959 | orchestrator | 2025-04-10 01:06:45 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:48.525773 | orchestrator | 2025-04-10 01:06:45 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:48.525923 | orchestrator | 2025-04-10 01:06:48 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:48.528286 | orchestrator | 2025-04-10 01:06:48 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:48.531089 | orchestrator | 2025-04-10 01:06:48 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:48.537236 | orchestrator | 2025-04-10 01:06:48 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:48.540278 | orchestrator | 2025-04-10 01:06:48 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:51.578393 | orchestrator | 2025-04-10 01:06:48 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:51.578644 | orchestrator | 2025-04-10 01:06:51 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:51.578680 | orchestrator | 2025-04-10 01:06:51 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:51.579229 | orchestrator | 2025-04-10 01:06:51 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:51.580564 | orchestrator | 2025-04-10 01:06:51 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:51.581318 | orchestrator | 2025-04-10 01:06:51 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:54.627426 | orchestrator | 2025-04-10 01:06:51 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:54.627569 | orchestrator | 2025-04-10 01:06:54 | INFO  | Task 
e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:54.631760 | orchestrator | 2025-04-10 01:06:54 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:54.634076 | orchestrator | 2025-04-10 01:06:54 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:54.636050 | orchestrator | 2025-04-10 01:06:54 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:54.637625 | orchestrator | 2025-04-10 01:06:54 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:54.638054 | orchestrator | 2025-04-10 01:06:54 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:06:57.669457 | orchestrator | 2025-04-10 01:06:57 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:06:57.669878 | orchestrator | 2025-04-10 01:06:57 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:06:57.669896 | orchestrator | 2025-04-10 01:06:57 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:06:57.670884 | orchestrator | 2025-04-10 01:06:57 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:06:57.671346 | orchestrator | 2025-04-10 01:06:57 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:06:57.671430 | orchestrator | 2025-04-10 01:06:57 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:00.702272 | orchestrator | 2025-04-10 01:07:00 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:00.702521 | orchestrator | 2025-04-10 01:07:00 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:07:00.703082 | orchestrator | 2025-04-10 01:07:00 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:00.703840 | orchestrator | 2025-04-10 01:07:00 | INFO  | Task 
1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:00.704686 | orchestrator | 2025-04-10 01:07:00 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:03.732307 | orchestrator | 2025-04-10 01:07:00 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:03.732544 | orchestrator | 2025-04-10 01:07:03 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:03.732982 | orchestrator | 2025-04-10 01:07:03 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:07:03.733017 | orchestrator | 2025-04-10 01:07:03 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:03.733563 | orchestrator | 2025-04-10 01:07:03 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:03.734155 | orchestrator | 2025-04-10 01:07:03 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:06.761233 | orchestrator | 2025-04-10 01:07:03 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:06.761376 | orchestrator | 2025-04-10 01:07:06 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:06.763044 | orchestrator | 2025-04-10 01:07:06 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state STARTED 2025-04-10 01:07:06.763102 | orchestrator | 2025-04-10 01:07:06 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:06.763482 | orchestrator | 2025-04-10 01:07:06 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:06.764161 | orchestrator | 2025-04-10 01:07:06 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:09.784776 | orchestrator | 2025-04-10 01:07:06 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:09.784917 | orchestrator | 2025-04-10 01:07:09 | INFO  | Task 
f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state STARTED 2025-04-10 01:07:09.786085 | orchestrator | 2025-04-10 01:07:09 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:09.787218 | orchestrator | 2025-04-10 01:07:09 | INFO  | Task bc691ca5-11ac-4723-a0db-da451c7a30f7 is in state SUCCESS 2025-04-10 01:07:09.789269 | orchestrator | 2025-04-10 01:07:09.789307 | orchestrator | 2025-04-10 01:07:09.789322 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 01:07:09.789337 | orchestrator | 2025-04-10 01:07:09.789351 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 01:07:09.789365 | orchestrator | Thursday 10 April 2025 01:06:05 +0000 (0:00:00.192) 0:00:00.192 ******** 2025-04-10 01:07:09.789787 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:07:09.789815 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:07:09.789830 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:07:09.789845 | orchestrator | 2025-04-10 01:07:09.789859 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 01:07:09.789928 | orchestrator | Thursday 10 April 2025 01:06:06 +0000 (0:00:00.367) 0:00:00.560 ******** 2025-04-10 01:07:09.790377 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-04-10 01:07:09.790401 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-04-10 01:07:09.790484 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-04-10 01:07:09.790746 | orchestrator | 2025-04-10 01:07:09.790767 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-04-10 01:07:09.790782 | orchestrator | 2025-04-10 01:07:09.790851 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-04-10 01:07:09.791068 | orchestrator | Thursday 10 April 
2025 01:06:06 +0000 (0:00:00.486) 0:00:01.047 ******** 2025-04-10 01:07:09.791090 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:07:09.791105 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:07:09.791119 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:07:09.791133 | orchestrator | 2025-04-10 01:07:09.791148 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:07:09.791197 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:07:09.791213 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:07:09.791242 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:07:09.791257 | orchestrator | 2025-04-10 01:07:09.791271 | orchestrator | 2025-04-10 01:07:09.791285 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:07:09.791300 | orchestrator | Thursday 10 April 2025 01:06:07 +0000 (0:00:00.916) 0:00:01.964 ******** 2025-04-10 01:07:09.791314 | orchestrator | =============================================================================== 2025-04-10 01:07:09.791327 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.92s 2025-04-10 01:07:09.791341 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2025-04-10 01:07:09.791355 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2025-04-10 01:07:09.791369 | orchestrator | 2025-04-10 01:07:09.791383 | orchestrator | 2025-04-10 01:07:09.791397 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 01:07:09.791411 | orchestrator | 2025-04-10 01:07:09.791431 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-04-10 01:07:09.791446 | orchestrator | Thursday 10 April 2025 01:03:30 +0000 (0:00:00.399) 0:00:00.399 ******** 2025-04-10 01:07:09.791460 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:07:09.791475 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:07:09.791499 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:07:09.791514 | orchestrator | 2025-04-10 01:07:09.791529 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 01:07:09.791561 | orchestrator | Thursday 10 April 2025 01:03:30 +0000 (0:00:00.617) 0:00:01.016 ******** 2025-04-10 01:07:09.791576 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-04-10 01:07:09.791590 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-04-10 01:07:09.791605 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-04-10 01:07:09.791619 | orchestrator | 2025-04-10 01:07:09.791633 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-04-10 01:07:09.791647 | orchestrator | 2025-04-10 01:07:09.791661 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-10 01:07:09.791675 | orchestrator | Thursday 10 April 2025 01:03:31 +0000 (0:00:00.396) 0:00:01.413 ******** 2025-04-10 01:07:09.791690 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:07:09.791704 | orchestrator | 2025-04-10 01:07:09.791722 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-04-10 01:07:09.791739 | orchestrator | Thursday 10 April 2025 01:03:32 +0000 (0:00:01.277) 0:00:02.691 ******** 2025-04-10 01:07:09.791755 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-04-10 01:07:09.791772 | orchestrator | 2025-04-10 01:07:09.791788 | orchestrator 
| TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-04-10 01:07:09.791804 | orchestrator | Thursday 10 April 2025 01:03:36 +0000 (0:00:03.911) 0:00:06.602 ******** 2025-04-10 01:07:09.791821 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-04-10 01:07:09.791837 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-04-10 01:07:09.791854 | orchestrator | 2025-04-10 01:07:09.791871 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-04-10 01:07:09.791887 | orchestrator | Thursday 10 April 2025 01:03:43 +0000 (0:00:06.708) 0:00:13.310 ******** 2025-04-10 01:07:09.791904 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-04-10 01:07:09.791920 | orchestrator | 2025-04-10 01:07:09.791983 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-04-10 01:07:09.792003 | orchestrator | Thursday 10 April 2025 01:03:46 +0000 (0:00:03.644) 0:00:16.955 ******** 2025-04-10 01:07:09.792070 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-10 01:07:09.792088 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-04-10 01:07:09.792103 | orchestrator | 2025-04-10 01:07:09.792117 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-04-10 01:07:09.792131 | orchestrator | Thursday 10 April 2025 01:03:50 +0000 (0:00:04.143) 0:00:21.099 ******** 2025-04-10 01:07:09.792147 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-10 01:07:09.792161 | orchestrator | 2025-04-10 01:07:09.792176 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-04-10 01:07:09.792190 | orchestrator | Thursday 10 April 2025 01:03:54 +0000 (0:00:03.362) 0:00:24.461 ******** 
2025-04-10 01:07:09.792204 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-04-10 01:07:09.792218 | orchestrator | 2025-04-10 01:07:09.792232 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-04-10 01:07:09.792246 | orchestrator | Thursday 10 April 2025 01:03:58 +0000 (0:00:04.305) 0:00:28.766 ******** 2025-04-10 01:07:09.792263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.792293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.792309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.792325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.792676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.792715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.792730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.792744 | orchestrator | 2025-04-10 01:07:09.792759 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-04-10 01:07:09.792774 | orchestrator | Thursday 10 April 2025 01:04:01 +0000 (0:00:03.202) 0:00:31.968 ******** 2025-04-10 01:07:09.792788 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:07:09.792803 | orchestrator | 2025-04-10 01:07:09.792817 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-04-10 01:07:09.792832 | orchestrator | Thursday 10 April 2025 01:04:01 +0000 (0:00:00.142) 0:00:32.111 ******** 2025-04-10 01:07:09.792846 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:07:09.792860 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:07:09.792874 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:07:09.792888 | orchestrator | 2025-04-10 01:07:09.792902 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-10 01:07:09.792916 | orchestrator | Thursday 10 April 2025 01:04:02 +0000 (0:00:00.451) 0:00:32.563 ******** 2025-04-10 01:07:09.792930 | orchestrator | included: 
/ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:07:09.792965 | orchestrator | 2025-04-10 01:07:09.792980 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-04-10 01:07:09.792994 | orchestrator | Thursday 10 April 2025 01:04:02 +0000 (0:00:00.646) 0:00:33.210 ******** 2025-04-10 01:07:09.793008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.793062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.793087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.793102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.793426 | orchestrator | 2025-04-10 01:07:09.793440 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-04-10 01:07:09.793455 | orchestrator | Thursday 10 April 2025 01:04:09 +0000 (0:00:06.664) 0:00:39.874 ******** 2025-04-10 01:07:09.793474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.793521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 01:07:09.793547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.793567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.793591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.793616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.793640 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:07:09.793671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.793761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 01:07:09.793793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.793818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.793843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.793869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.793894 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:07:09.793929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.794213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 01:07:09.794239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 
01:07:09.794256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794300 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:07:09.794315 | 
orchestrator | 2025-04-10 01:07:09.794330 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-04-10 01:07:09.794351 | orchestrator | Thursday 10 April 2025 01:04:11 +0000 (0:00:01.619) 0:00:41.493 ******** 2025-04-10 01:07:09.794371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.794424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 01:07:09.794442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794501 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:07:09.794521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.794548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 01:07:09.794592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794659 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:07:09.794673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.794694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 01:07:09.794739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.794806 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:07:09.794827 | orchestrator | 2025-04-10 01:07:09.794842 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-04-10 01:07:09.794856 | orchestrator | Thursday 10 April 2025 01:04:15 +0000 (0:00:04.683) 0:00:46.176 ******** 2025-04-10 01:07:09.794871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.794914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.794931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.794980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}}) 2025-04-10 01:07:09.795176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795259 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.795289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-10 01:07:09.795326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-10 01:07:09.795341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-10 01:07:09.795355 | orchestrator |
2025-04-10 01:07:09.795370 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-04-10 01:07:09.795384 | orchestrator | Thursday 10
April 2025 01:04:24 +0000 (0:00:08.304) 0:00:54.481 ******** 2025-04-10 01:07:09.795435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.795453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.795474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.795489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.795867 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.795910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.795924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 
'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-10 01:07:09.795979 | orchestrator |
2025-04-10 01:07:09.796005 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-04-10 01:07:09.796020 | orchestrator | Thursday 10 April 2025 01:04:52 +0000 (0:00:28.049) 0:01:22.531 ********
2025-04-10 01:07:09.796034 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-04-10 01:07:09.796049 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-04-10 01:07:09.796064 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-04-10 01:07:09.796078 | orchestrator |
2025-04-10 01:07:09.796092 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-04-10 01:07:09.796107 | orchestrator | Thursday 10 April 2025 01:05:03 +0000 (0:00:10.939) 0:01:33.470 ********
2025-04-10 01:07:09.796121 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-04-10 01:07:09.796135 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-04-10 01:07:09.796149 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-04-10 01:07:09.796163 | orchestrator |
2025-04-10 01:07:09.796177 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-04-10
01:07:09.796192 | orchestrator | Thursday 10 April 2025 01:05:08 +0000 (0:00:05.496) 0:01:38.967 ******** 2025-04-10 01:07:09.796206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.796232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.796255 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.796277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.796292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.796359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.796429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.796506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796521 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.796535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.796565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-10 01:07:09.796579 | orchestrator |
2025-04-10 01:07:09.796608 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-04-10 01:07:09.796634 | orchestrator | Thursday 10 April 2025 01:05:12 +0000 (0:00:04.033) 0:01:43.001 ********
2025-04-10 01:07:09.796657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-10 01:07:09.796680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes':
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.796695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.796709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.796756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.796785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.796886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.796994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.797010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.797040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2025-04-10 01:07:09.797084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.797099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797114 | orchestrator | 2025-04-10 01:07:09.797129 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-10 01:07:09.797143 | orchestrator | Thursday 10 April 2025 01:05:15 +0000 (0:00:02.752) 0:01:45.753 ******** 2025-04-10 01:07:09.797157 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:07:09.797172 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:07:09.797186 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:07:09.797199 | orchestrator | 2025-04-10 01:07:09.797213 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-04-10 01:07:09.797227 | orchestrator | Thursday 10 
April 2025 01:05:15 +0000 (0:00:00.484) 0:01:46.238 ******** 2025-04-10 01:07:09.797242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.797268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 01:07:09.797290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797387 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:07:09.797428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.797457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 01:07:09.797481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797565 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:07:09.797586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-10 01:07:09.797606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-10 01:07:09.797622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.797711 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:07:09.797726 | orchestrator | 2025-04-10 01:07:09.797740 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-04-10 01:07:09.797754 | orchestrator | Thursday 10 April 2025 01:05:16 +0000 (0:00:00.975) 0:01:47.214 ******** 2025-04-10 01:07:09.797775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.797791 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.797806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-10 01:07:09.797830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.797852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.797867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-10 01:07:09.797889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.797904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.797919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.797973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.797998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.798013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.798090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.798113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.798129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.798156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.798172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.798200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.798215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.798230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-10 01:07:09.798251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-10 01:07:09.798266 | orchestrator | 2025-04-10 01:07:09.798280 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-10 01:07:09.798295 | orchestrator | Thursday 10 April 2025 01:05:23 +0000 (0:00:06.986) 0:01:54.201 ******** 2025-04-10 01:07:09.798309 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:07:09.798323 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:07:09.798337 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:07:09.798351 | orchestrator | 2025-04-10 
01:07:09.798365 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-04-10 01:07:09.798380 | orchestrator | Thursday 10 April 2025 01:05:25 +0000 (0:00:01.231) 0:01:55.432 ******** 2025-04-10 01:07:09.798394 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-04-10 01:07:09.798408 | orchestrator | 2025-04-10 01:07:09.798422 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-04-10 01:07:09.798436 | orchestrator | Thursday 10 April 2025 01:05:27 +0000 (0:00:02.562) 0:01:57.997 ******** 2025-04-10 01:07:09.798450 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-10 01:07:09.798472 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-04-10 01:07:09.798486 | orchestrator | 2025-04-10 01:07:09.798500 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-04-10 01:07:09.798514 | orchestrator | Thursday 10 April 2025 01:05:30 +0000 (0:00:02.601) 0:02:00.598 ******** 2025-04-10 01:07:09.798528 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:07:09.798542 | orchestrator | 2025-04-10 01:07:09.798556 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-04-10 01:07:09.798570 | orchestrator | Thursday 10 April 2025 01:05:46 +0000 (0:00:16.341) 0:02:16.940 ******** 2025-04-10 01:07:09.798584 | orchestrator | 2025-04-10 01:07:09.798598 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-04-10 01:07:09.798612 | orchestrator | Thursday 10 April 2025 01:05:46 +0000 (0:00:00.240) 0:02:17.180 ******** 2025-04-10 01:07:09.798626 | orchestrator | 2025-04-10 01:07:09.798639 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-04-10 01:07:09.798658 | orchestrator | Thursday 10 April 2025 01:05:47 +0000 (0:00:00.295) 
0:02:17.476 ******** 2025-04-10 01:07:09.798673 | orchestrator | 2025-04-10 01:07:09.798687 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-04-10 01:07:09.798701 | orchestrator | Thursday 10 April 2025 01:05:47 +0000 (0:00:00.217) 0:02:17.693 ******** 2025-04-10 01:07:09.798715 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:07:09.798729 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:07:09.798743 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:07:09.798757 | orchestrator | 2025-04-10 01:07:09.798771 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-04-10 01:07:09.798785 | orchestrator | Thursday 10 April 2025 01:05:58 +0000 (0:00:10.812) 0:02:28.506 ******** 2025-04-10 01:07:09.798799 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:07:09.798813 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:07:09.798827 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:07:09.798842 | orchestrator | 2025-04-10 01:07:09.798856 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-04-10 01:07:09.798870 | orchestrator | Thursday 10 April 2025 01:06:11 +0000 (0:00:13.089) 0:02:41.596 ******** 2025-04-10 01:07:09.798884 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:07:09.798897 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:07:09.798912 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:07:09.798926 | orchestrator | 2025-04-10 01:07:09.798964 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-04-10 01:07:09.798981 | orchestrator | Thursday 10 April 2025 01:06:24 +0000 (0:00:13.041) 0:02:54.638 ******** 2025-04-10 01:07:09.798995 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:07:09.799009 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:07:09.799023 | orchestrator | changed: 
[testbed-node-1] 2025-04-10 01:07:09.799037 | orchestrator | 2025-04-10 01:07:09.799051 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-04-10 01:07:09.799065 | orchestrator | Thursday 10 April 2025 01:06:33 +0000 (0:00:08.979) 0:03:03.618 ******** 2025-04-10 01:07:09.799079 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:07:09.799093 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:07:09.799107 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:07:09.799121 | orchestrator | 2025-04-10 01:07:09.799136 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-04-10 01:07:09.799150 | orchestrator | Thursday 10 April 2025 01:06:44 +0000 (0:00:11.423) 0:03:15.041 ******** 2025-04-10 01:07:09.799164 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:07:09.799178 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:07:09.799192 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:07:09.799206 | orchestrator | 2025-04-10 01:07:09.799220 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-04-10 01:07:09.799234 | orchestrator | Thursday 10 April 2025 01:07:00 +0000 (0:00:16.100) 0:03:31.141 ******** 2025-04-10 01:07:09.799255 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:07:09.799269 | orchestrator | 2025-04-10 01:07:09.799284 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:07:09.799304 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-10 01:07:12.815036 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-10 01:07:12.815160 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-10 01:07:12.815179 | orchestrator | 2025-04-10 
01:07:12.815194 | orchestrator | 2025-04-10 01:07:12.815209 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:07:12.815225 | orchestrator | Thursday 10 April 2025 01:07:06 +0000 (0:00:05.907) 0:03:37.049 ******** 2025-04-10 01:07:12.815240 | orchestrator | =============================================================================== 2025-04-10 01:07:12.815254 | orchestrator | designate : Copying over designate.conf -------------------------------- 28.05s 2025-04-10 01:07:12.815268 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.34s 2025-04-10 01:07:12.815282 | orchestrator | designate : Restart designate-worker container ------------------------- 16.10s 2025-04-10 01:07:12.815296 | orchestrator | designate : Restart designate-api container ---------------------------- 13.09s 2025-04-10 01:07:12.815309 | orchestrator | designate : Restart designate-central container ------------------------ 13.04s 2025-04-10 01:07:12.815323 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.42s 2025-04-10 01:07:12.815337 | orchestrator | designate : Copying over pools.yaml ------------------------------------ 10.94s 2025-04-10 01:07:12.815351 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.81s 2025-04-10 01:07:12.815467 | orchestrator | designate : Restart designate-producer container ------------------------ 8.98s 2025-04-10 01:07:12.815483 | orchestrator | designate : Copying over config.json files for services ----------------- 8.30s 2025-04-10 01:07:12.815497 | orchestrator | designate : Check designate containers ---------------------------------- 6.99s 2025-04-10 01:07:12.815511 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.71s 2025-04-10 01:07:12.815525 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates 
------ 6.66s 2025-04-10 01:07:12.815559 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.91s 2025-04-10 01:07:12.815574 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.50s 2025-04-10 01:07:12.815588 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 4.68s 2025-04-10 01:07:12.815602 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.30s 2025-04-10 01:07:12.815616 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.14s 2025-04-10 01:07:12.815630 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.03s 2025-04-10 01:07:12.815644 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.91s 2025-04-10 01:07:12.815659 | orchestrator | 2025-04-10 01:07:09 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:12.815679 | orchestrator | 2025-04-10 01:07:09 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:12.815693 | orchestrator | 2025-04-10 01:07:09 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:12.815707 | orchestrator | 2025-04-10 01:07:09 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:12.815738 | orchestrator | 2025-04-10 01:07:12 | INFO  | Task f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state STARTED 2025-04-10 01:07:12.816136 | orchestrator | 2025-04-10 01:07:12 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:12.816165 | orchestrator | 2025-04-10 01:07:12 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:12.816186 | orchestrator | 2025-04-10 01:07:12 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:12.816706 | orchestrator | 2025-04-10 
01:07:12 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:15.849447 | orchestrator | 2025-04-10 01:07:12 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:15.849585 | orchestrator | 2025-04-10 01:07:15 | INFO  | Task f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state STARTED 2025-04-10 01:07:15.851277 | orchestrator | 2025-04-10 01:07:15 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:15.854135 | orchestrator | 2025-04-10 01:07:15 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:15.855312 | orchestrator | 2025-04-10 01:07:15 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:15.855417 | orchestrator | 2025-04-10 01:07:15 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:18.892106 | orchestrator | 2025-04-10 01:07:15 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:18.892256 | orchestrator | 2025-04-10 01:07:18 | INFO  | Task f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state STARTED 2025-04-10 01:07:18.893021 | orchestrator | 2025-04-10 01:07:18 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:18.894142 | orchestrator | 2025-04-10 01:07:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:18.895162 | orchestrator | 2025-04-10 01:07:18 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:18.896753 | orchestrator | 2025-04-10 01:07:18 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:21.953827 | orchestrator | 2025-04-10 01:07:18 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:21.954087 | orchestrator | 2025-04-10 01:07:21 | INFO  | Task f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state STARTED 2025-04-10 01:07:21.954825 | orchestrator | 2025-04-10 01:07:21 | INFO  | Task 
e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:21.955866 | orchestrator | 2025-04-10 01:07:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:21.957294 | orchestrator | 2025-04-10 01:07:21 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:21.959229 | orchestrator | 2025-04-10 01:07:21 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:25.025368 | orchestrator | 2025-04-10 01:07:21 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:25.025470 | orchestrator | 2025-04-10 01:07:25 | INFO  | Task f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state STARTED 2025-04-10 01:07:25.027855 | orchestrator | 2025-04-10 01:07:25 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:25.029373 | orchestrator | 2025-04-10 01:07:25 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:25.030690 | orchestrator | 2025-04-10 01:07:25 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:25.031500 | orchestrator | 2025-04-10 01:07:25 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:28.089640 | orchestrator | 2025-04-10 01:07:25 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:28.089802 | orchestrator | 2025-04-10 01:07:28 | INFO  | Task f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state STARTED 2025-04-10 01:07:28.091902 | orchestrator | 2025-04-10 01:07:28 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:28.093767 | orchestrator | 2025-04-10 01:07:28 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:28.095575 | orchestrator | 2025-04-10 01:07:28 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:28.097494 | orchestrator | 2025-04-10 01:07:28 | INFO  | Task 
16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:31.148771 | orchestrator | 2025-04-10 01:07:28 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:31.148921 | orchestrator | 2025-04-10 01:07:31 | INFO  | Task f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state STARTED 2025-04-10 01:07:31.149984 | orchestrator | 2025-04-10 01:07:31 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:31.151410 | orchestrator | 2025-04-10 01:07:31 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:31.152574 | orchestrator | 2025-04-10 01:07:31 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:31.153763 | orchestrator | 2025-04-10 01:07:31 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:34.208548 | orchestrator | 2025-04-10 01:07:31 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:34.208724 | orchestrator | 2025-04-10 01:07:34 | INFO  | Task f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state STARTED 2025-04-10 01:07:34.210801 | orchestrator | 2025-04-10 01:07:34 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:34.213439 | orchestrator | 2025-04-10 01:07:34 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:34.216234 | orchestrator | 2025-04-10 01:07:34 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:34.218225 | orchestrator | 2025-04-10 01:07:34 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:37.253861 | orchestrator | 2025-04-10 01:07:34 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:37.254176 | orchestrator | 2025-04-10 01:07:37 | INFO  | Task f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state STARTED 2025-04-10 01:07:37.254931 | orchestrator | 2025-04-10 01:07:37 | INFO  | Task 
e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:37.255083 | orchestrator | 2025-04-10 01:07:37 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:37.255766 | orchestrator | 2025-04-10 01:07:37 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:37.256506 | orchestrator | 2025-04-10 01:07:37 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:37.256745 | orchestrator | 2025-04-10 01:07:37 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:40.291877 | orchestrator | 2025-04-10 01:07:40 | INFO  | Task f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state STARTED 2025-04-10 01:07:40.293875 | orchestrator | 2025-04-10 01:07:40 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:40.293926 | orchestrator | 2025-04-10 01:07:40 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:40.295400 | orchestrator | 2025-04-10 01:07:40 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:40.296320 | orchestrator | 2025-04-10 01:07:40 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:40.296931 | orchestrator | 2025-04-10 01:07:40 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:43.354338 | orchestrator | 2025-04-10 01:07:43 | INFO  | Task f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state STARTED 2025-04-10 01:07:43.355097 | orchestrator | 2025-04-10 01:07:43 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:43.355139 | orchestrator | 2025-04-10 01:07:43 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:43.355164 | orchestrator | 2025-04-10 01:07:43 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:43.356371 | orchestrator | 2025-04-10 01:07:43 | INFO  | Task 
16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:46.409038 | orchestrator | 2025-04-10 01:07:43 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:46.409180 | orchestrator | 2025-04-10 01:07:46 | INFO  | Task f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state STARTED 2025-04-10 01:07:46.410068 | orchestrator | 2025-04-10 01:07:46 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:46.411222 | orchestrator | 2025-04-10 01:07:46 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:46.413126 | orchestrator | 2025-04-10 01:07:46 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:46.413932 | orchestrator | 2025-04-10 01:07:46 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:46.414289 | orchestrator | 2025-04-10 01:07:46 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:49.460657 | orchestrator | 2025-04-10 01:07:49 | INFO  | Task f9af8a66-3afe-43b6-b3cb-5021eb0f335a is in state SUCCESS 2025-04-10 01:07:49.462368 | orchestrator | 2025-04-10 01:07:49 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:07:49.463079 | orchestrator | 2025-04-10 01:07:49 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:07:49.464306 | orchestrator | 2025-04-10 01:07:49 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:07:49.465023 | orchestrator | 2025-04-10 01:07:49 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:07:49.467731 | orchestrator | 2025-04-10 01:07:49 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED 2025-04-10 01:07:52.515358 | orchestrator | 2025-04-10 01:07:49 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:07:52.515497 | orchestrator | 2025-04-10 01:07:52 | INFO  | Task 
e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:07:52.517996 | orchestrator | 2025-04-10 01:07:52 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:07:52.520606 | orchestrator | 2025-04-10 01:07:52 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:07:52.525391 | orchestrator | 2025-04-10 01:07:52 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED
2025-04-10 01:07:52.529574 | orchestrator | 2025-04-10 01:07:52 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED
2025-04-10 01:07:55.566853 | orchestrator | 2025-04-10 01:07:52 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:07:55.567027 | orchestrator | 2025-04-10 01:07:55 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:07:55.567569 | orchestrator | 2025-04-10 01:07:55 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:07:55.568654 | orchestrator | 2025-04-10 01:07:55 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:07:55.569535 | orchestrator | 2025-04-10 01:07:55 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED
2025-04-10 01:07:55.570391 | orchestrator | 2025-04-10 01:07:55 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED
2025-04-10 01:07:55.570576 | orchestrator | 2025-04-10 01:07:55 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:07:58.610136 | orchestrator | 2025-04-10 01:07:58 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:07:58.611337 | orchestrator | 2025-04-10 01:07:58 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:07:58.612318 | orchestrator | 2025-04-10 01:07:58 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:07:58.613277 | orchestrator | 2025-04-10 01:07:58 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED
2025-04-10 01:07:58.614256 | orchestrator | 2025-04-10 01:07:58 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED
2025-04-10 01:07:58.614834 | orchestrator | 2025-04-10 01:07:58 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:08:01.653568 | orchestrator | 2025-04-10 01:08:01 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:08:01.658206 | orchestrator | 2025-04-10 01:08:01 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:08:01.662405 | orchestrator | 2025-04-10 01:08:01 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:08:01.665853 | orchestrator | 2025-04-10 01:08:01 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED
2025-04-10 01:08:01.671915 | orchestrator | 2025-04-10 01:08:01 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED
2025-04-10 01:08:01.673776 | orchestrator | 2025-04-10 01:08:01 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:08:04.718450 | orchestrator | 2025-04-10 01:08:04 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:08:04.721168 | orchestrator | 2025-04-10 01:08:04 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:08:04.722682 | orchestrator | 2025-04-10 01:08:04 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:08:04.725203 | orchestrator | 2025-04-10 01:08:04 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED
2025-04-10 01:08:04.726372 | orchestrator | 2025-04-10 01:08:04 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED
2025-04-10 01:08:04.726524 | orchestrator | 2025-04-10 01:08:04 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:08:07.772690 | orchestrator | 2025-04-10 01:08:07 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:08:07.773737 | orchestrator | 2025-04-10 01:08:07 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:08:07.773865 | orchestrator | 2025-04-10 01:08:07 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:08:07.775766 | orchestrator | 2025-04-10 01:08:07 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED
2025-04-10 01:08:07.775810 | orchestrator | 2025-04-10 01:08:07 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state STARTED
2025-04-10 01:08:10.836801 | orchestrator | 2025-04-10 01:08:07 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:08:10.836920 | orchestrator | 2025-04-10 01:08:10 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED
2025-04-10 01:08:10.838596 | orchestrator | 2025-04-10 01:08:10 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:08:10.839284 | orchestrator | 2025-04-10 01:08:10 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:08:10.840999 | orchestrator | 2025-04-10 01:08:10 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED
2025-04-10 01:08:10.842759 | orchestrator | 2025-04-10 01:08:10 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED
2025-04-10 01:08:10.846276 | orchestrator |
2025-04-10 01:08:10.846326 | orchestrator |
2025-04-10 01:08:10.846342 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-10 01:08:10.846357 | orchestrator |
2025-04-10 01:08:10.846644 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-10 01:08:10.846663 | orchestrator | Thursday 10 April 2025 01:07:13 +0000 (0:00:00.374) 0:00:00.374 ********
2025-04-10 01:08:10.846678 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:08:10.846694 | orchestrator
| ok: [testbed-node-1]
2025-04-10 01:08:10.846708 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:08:10.846722 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:08:10.846736 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:08:10.846750 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:08:10.846764 | orchestrator | ok: [testbed-manager]
2025-04-10 01:08:10.846778 | orchestrator |
2025-04-10 01:08:10.846792 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-10 01:08:10.846806 | orchestrator | Thursday 10 April 2025 01:07:14 +0000 (0:00:01.176) 0:00:01.551 ********
2025-04-10 01:08:10.846820 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-04-10 01:08:10.846835 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-04-10 01:08:10.846849 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-04-10 01:08:10.846862 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-04-10 01:08:10.846876 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-04-10 01:08:10.846890 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-04-10 01:08:10.846905 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-04-10 01:08:10.846919 | orchestrator |
2025-04-10 01:08:10.846933 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-04-10 01:08:10.846947 | orchestrator |
2025-04-10 01:08:10.846986 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-04-10 01:08:10.847002 | orchestrator | Thursday 10 April 2025 01:07:15 +0000 (0:00:00.737) 0:00:02.289 ********
2025-04-10 01:08:10.847018 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-04-10 01:08:10.847033 | orchestrator |
2025-04-10 01:08:10.847047 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-04-10 01:08:10.847061 | orchestrator | Thursday 10 April 2025 01:07:16 +0000 (0:00:01.248) 0:00:03.537 ********
2025-04-10 01:08:10.847075 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-04-10 01:08:10.847089 | orchestrator |
2025-04-10 01:08:10.847103 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-04-10 01:08:10.847116 | orchestrator | Thursday 10 April 2025 01:07:20 +0000 (0:00:03.557) 0:00:07.094 ********
2025-04-10 01:08:10.847131 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-04-10 01:08:10.847147 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-04-10 01:08:10.847188 | orchestrator |
2025-04-10 01:08:10.847202 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-04-10 01:08:10.847216 | orchestrator | Thursday 10 April 2025 01:07:27 +0000 (0:00:06.911) 0:00:14.006 ********
2025-04-10 01:08:10.847230 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-04-10 01:08:10.847245 | orchestrator |
2025-04-10 01:08:10.847262 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-04-10 01:08:10.847279 | orchestrator | Thursday 10 April 2025 01:07:30 +0000 (0:00:03.535) 0:00:17.541 ********
2025-04-10 01:08:10.847295 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-04-10 01:08:10.847311 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-04-10 01:08:10.847326 | orchestrator |
2025-04-10 01:08:10.847342 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles]
*************
2025-04-10 01:08:10.847358 | orchestrator | Thursday 10 April 2025 01:07:35 +0000 (0:00:04.298) 0:00:21.839 ********
2025-04-10 01:08:10.847374 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-04-10 01:08:10.847391 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-04-10 01:08:10.847406 | orchestrator |
2025-04-10 01:08:10.847422 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-04-10 01:08:10.847438 | orchestrator | Thursday 10 April 2025 01:07:41 +0000 (0:00:06.629) 0:00:28.469 ********
2025-04-10 01:08:10.847454 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-04-10 01:08:10.847469 | orchestrator |
2025-04-10 01:08:10.847486 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 01:08:10.847502 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 01:08:10.847518 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 01:08:10.847544 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 01:08:10.847561 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 01:08:10.847578 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 01:08:10.847603 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 01:08:10.847620 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 01:08:10.847635 | orchestrator |
2025-04-10 01:08:10.847649 | orchestrator |
2025-04-10 01:08:10.847663 | orchestrator | TASKS RECAP ********************************************************************
2025-04-10 01:08:10.847676 | orchestrator | Thursday 10 April 2025 01:07:47 +0000 (0:00:05.736) 0:00:34.206 ********
2025-04-10 01:08:10.847690 | orchestrator | ===============================================================================
2025-04-10 01:08:10.847711 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.91s
2025-04-10 01:08:10.847726 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.63s
2025-04-10 01:08:10.847739 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.74s
2025-04-10 01:08:10.847753 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.30s
2025-04-10 01:08:10.847773 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.56s
2025-04-10 01:08:10.847787 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.54s
2025-04-10 01:08:10.847801 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.25s
2025-04-10 01:08:10.847822 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.18s
2025-04-10 01:08:10.847836 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2025-04-10 01:08:10.847850 | orchestrator |
2025-04-10 01:08:10.847864 | orchestrator |
2025-04-10 01:08:10.847877 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-10 01:08:10.847891 | orchestrator |
2025-04-10 01:08:10.847904 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-10 01:08:10.847918 | orchestrator | Thursday 10 April 2025 01:05:52 +0000 (0:00:00.721) 0:00:00.721 ********
2025-04-10 01:08:10.847932 | orchestrator | ok: [testbed-node-0]
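The service-ks-register output above reports ok for objects that already existed (the admin role, the service project) and changed for objects it had to create (the swift service, the ceph_rgw user, the ResellerAdmin role): the usual idempotent ensure-pattern. A minimal sketch of that pattern, with a plain dict standing in for Keystone (the `ensure` helper is hypothetical; the names are taken from the log):

```python
def ensure(registry, kind, name, attrs=None):
    """Create `name` under `kind` only if it is missing; report Ansible-style status."""
    bucket = registry.setdefault(kind, {})
    if name in bucket:
        return "ok"           # already present -> nothing to change
    bucket[name] = attrs or {}
    return "changed"          # object had to be created

# State before the play: admin role and service project already exist,
# matching the "ok" results in the log above.
keystone = {"roles": {"admin": {}}, "projects": {"service": {}}}

results = [
    ensure(keystone, "services", "swift", {"type": "object-store"}),
    ensure(keystone, "projects", "service"),                # pre-existing -> "ok"
    ensure(keystone, "users", "ceph_rgw", {"project": "service"}),
    ensure(keystone, "roles", "admin"),                     # pre-existing -> "ok"
    ensure(keystone, "roles", "ResellerAdmin"),
]
```

Re-running the same list against the updated registry would report ok for every item, which is why a second deploy run leaves changed=0.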
2025-04-10 01:08:10.847946 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:08:10.847980 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:08:10.848001 | orchestrator | 2025-04-10 01:08:10.848016 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 01:08:10.848030 | orchestrator | Thursday 10 April 2025 01:05:52 +0000 (0:00:00.450) 0:00:01.171 ******** 2025-04-10 01:08:10.848044 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-04-10 01:08:10.848058 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-04-10 01:08:10.848072 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-04-10 01:08:10.848086 | orchestrator | 2025-04-10 01:08:10.848100 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-04-10 01:08:10.848113 | orchestrator | 2025-04-10 01:08:10.848127 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-04-10 01:08:10.848141 | orchestrator | Thursday 10 April 2025 01:05:52 +0000 (0:00:00.246) 0:00:01.418 ******** 2025-04-10 01:08:10.848155 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:08:10.848169 | orchestrator | 2025-04-10 01:08:10.848183 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-04-10 01:08:10.848197 | orchestrator | Thursday 10 April 2025 01:05:54 +0000 (0:00:01.153) 0:00:02.572 ******** 2025-04-10 01:08:10.848211 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-04-10 01:08:10.848225 | orchestrator | 2025-04-10 01:08:10.848239 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-04-10 01:08:10.848252 | orchestrator | Thursday 10 April 2025 01:05:57 +0000 (0:00:03.655) 0:00:06.227 ******** 2025-04-10 
01:08:10.848266 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-04-10 01:08:10.848280 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-04-10 01:08:10.848294 | orchestrator | 2025-04-10 01:08:10.848308 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-04-10 01:08:10.848323 | orchestrator | Thursday 10 April 2025 01:06:04 +0000 (0:00:06.737) 0:00:12.965 ******** 2025-04-10 01:08:10.848336 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-10 01:08:10.848350 | orchestrator | 2025-04-10 01:08:10.848364 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-04-10 01:08:10.848378 | orchestrator | Thursday 10 April 2025 01:06:08 +0000 (0:00:03.910) 0:00:16.875 ******** 2025-04-10 01:08:10.848392 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-10 01:08:10.848406 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-04-10 01:08:10.848420 | orchestrator | 2025-04-10 01:08:10.848434 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-04-10 01:08:10.848448 | orchestrator | Thursday 10 April 2025 01:06:12 +0000 (0:00:04.111) 0:00:20.987 ******** 2025-04-10 01:08:10.848461 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-10 01:08:10.848475 | orchestrator | 2025-04-10 01:08:10.848489 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-04-10 01:08:10.848503 | orchestrator | Thursday 10 April 2025 01:06:16 +0000 (0:00:03.591) 0:00:24.579 ******** 2025-04-10 01:08:10.848517 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-04-10 01:08:10.848537 | orchestrator | 2025-04-10 01:08:10.848552 | orchestrator | TASK [magnum : Creating 
Magnum trustee domain] ********************************* 2025-04-10 01:08:10.848565 | orchestrator | Thursday 10 April 2025 01:06:20 +0000 (0:00:04.520) 0:00:29.100 ******** 2025-04-10 01:08:10.848579 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:08:10.848593 | orchestrator | 2025-04-10 01:08:10.848607 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-04-10 01:08:10.848635 | orchestrator | Thursday 10 April 2025 01:06:24 +0000 (0:00:03.466) 0:00:32.566 ******** 2025-04-10 01:08:10.848650 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:08:10.848664 | orchestrator | 2025-04-10 01:08:10.848678 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-04-10 01:08:10.848692 | orchestrator | Thursday 10 April 2025 01:06:28 +0000 (0:00:04.912) 0:00:37.479 ******** 2025-04-10 01:08:10.848706 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:08:10.848719 | orchestrator | 2025-04-10 01:08:10.848733 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-04-10 01:08:10.848747 | orchestrator | Thursday 10 April 2025 01:06:33 +0000 (0:00:04.042) 0:00:41.522 ******** 2025-04-10 01:08:10.848763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.848814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.848831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.848854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.848878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.848894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.848909 | orchestrator | 2025-04-10 01:08:10.848923 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-04-10 01:08:10.848937 | orchestrator | Thursday 10 April 2025 01:06:35 +0000 (0:00:02.846) 0:00:44.369 ******** 2025-04-10 01:08:10.848951 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:08:10.849129 | orchestrator | 2025-04-10 01:08:10.849164 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-04-10 01:08:10.849178 | orchestrator | Thursday 10 April 2025 01:06:36 +0000 (0:00:00.165) 0:00:44.534 ******** 2025-04-10 01:08:10.849193 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:08:10.849206 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:08:10.849220 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:08:10.849234 | orchestrator | 2025-04-10 01:08:10.849248 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-04-10 01:08:10.849262 | orchestrator | Thursday 10 April 2025 01:06:36 +0000 (0:00:00.772) 0:00:45.307 ******** 2025-04-10 01:08:10.849276 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-10 01:08:10.849289 | orchestrator | 2025-04-10 01:08:10.849303 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-04-10 01:08:10.849317 | orchestrator | Thursday 10 April 2025 01:06:38 +0000 (0:00:01.473) 0:00:46.780 ******** 2025-04-10 01:08:10.849332 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 01:08:10.849378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:08:10.849393 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:08:10.849419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 01:08:10.849434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 01:08:10.849447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:08:10.849459 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:08:10.849478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:08:10.849491 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:08:10.849504 | orchestrator | 2025-04-10 01:08:10.849517 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-04-10 01:08:10.849529 | orchestrator | Thursday 10 April 2025 01:06:39 +0000 (0:00:01.454) 0:00:48.235 ******** 2025-04-10 01:08:10.849542 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:08:10.849554 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:08:10.849566 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:08:10.849578 | orchestrator | 2025-04-10 01:08:10.849591 | 
orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-04-10 01:08:10.849603 | orchestrator | Thursday 10 April 2025 01:06:40 +0000 (0:00:00.375) 0:00:48.610 ******** 2025-04-10 01:08:10.849616 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:08:10.849628 | orchestrator | 2025-04-10 01:08:10.849640 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-04-10 01:08:10.849653 | orchestrator | Thursday 10 April 2025 01:06:41 +0000 (0:00:01.131) 0:00:49.741 ******** 2025-04-10 01:08:10.849682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.849697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.849711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.849745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.849759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.849779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.849792 | orchestrator | 2025-04-10 01:08:10.849805 | orchestrator | TASK [service-cert-copy : magnum | Copying over 
backend internal TLS certificate] *** 2025-04-10 01:08:10.849817 | orchestrator | Thursday 10 April 2025 01:06:45 +0000 (0:00:04.707) 0:00:54.448 ******** 2025-04-10 01:08:10.849830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 01:08:10.849849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:08:10.849862 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:08:10.849883 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 01:08:10.849903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:08:10.849916 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:08:10.849929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 01:08:10.849942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:08:10.849986 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:08:10.850000 | orchestrator | 2025-04-10 01:08:10.850012 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-04-10 01:08:10.850086 | orchestrator | Thursday 10 April 2025 01:06:49 +0000 (0:00:03.120) 0:00:57.569 ******** 2025-04-10 01:08:10.850099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 01:08:10.850124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:08:10.850138 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:08:10.850160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 01:08:10 | INFO  | Task 16297447-9869-407b-a30e-6424c72ecd2d is in state SUCCESS 2025-04-10 01:08:10.850191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:08:10.850211 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:08:10.850224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT':
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 01:08:10.850237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:08:10.850250 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:08:10.850263 | orchestrator | 2025-04-10 01:08:10.850275 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-04-10 01:08:10.850288 | orchestrator | Thursday 10 April 2025 01:06:52 +0000 (0:00:03.500) 0:01:01.069 ******** 2025-04-10 01:08:10.850307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.850330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.850350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.850363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.850376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.850410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.850424 | orchestrator | 2025-04-10 01:08:10.850437 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-04-10 01:08:10.850450 | orchestrator | Thursday 10 April 2025 01:06:56 +0000 (0:00:03.609) 0:01:04.679 ******** 2025-04-10 01:08:10.850462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.850482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.850495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.850508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.850538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.850553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.850571 | orchestrator | 2025-04-10 01:08:10.850584 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-04-10 01:08:10.850597 | orchestrator | Thursday 10 April 2025 01:07:07 +0000 (0:00:10.832) 0:01:15.512 ******** 2025-04-10 01:08:10.850609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 01:08:10.850622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:08:10.850635 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:08:10.850648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 01:08:10.850676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:08:10.850699 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:08:10.850713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-10 01:08:10.850726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:08:10.850739 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:08:10.850751 | orchestrator | 2025-04-10 01:08:10.850764 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-04-10 01:08:10.850776 | orchestrator | Thursday 10 April 2025 01:07:08 +0000 (0:00:01.898) 0:01:17.410 ******** 2025-04-10 01:08:10.850789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.850816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.850837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-10 01:08:10.850850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.850863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.850876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:08:10.850889 | orchestrator | 2025-04-10 01:08:10.850902 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-04-10 01:08:10.850918 | orchestrator | Thursday 10 April 2025 01:07:12 +0000 (0:00:03.469) 
0:01:20.879 ******** 2025-04-10 01:08:10.850931 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:08:10.850944 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:08:10.850956 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:08:10.850990 | orchestrator | 2025-04-10 01:08:10.851003 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-04-10 01:08:10.851015 | orchestrator | Thursday 10 April 2025 01:07:12 +0000 (0:00:00.406) 0:01:21.286 ******** 2025-04-10 01:08:10.851033 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:08:10.851046 | orchestrator | 2025-04-10 01:08:10.851059 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-04-10 01:08:10.851071 | orchestrator | Thursday 10 April 2025 01:07:15 +0000 (0:00:02.695) 0:01:23.984 ******** 2025-04-10 01:08:10.851083 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:08:10.851095 | orchestrator | 2025-04-10 01:08:10.851108 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-04-10 01:08:10.851127 | orchestrator | Thursday 10 April 2025 01:07:17 +0000 (0:00:02.328) 0:01:26.313 ******** 2025-04-10 01:08:13.882351 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:08:13.882602 | orchestrator | 2025-04-10 01:08:13.882632 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-10 01:08:13.882649 | orchestrator | Thursday 10 April 2025 01:07:35 +0000 (0:00:17.876) 0:01:44.190 ******** 2025-04-10 01:08:13.882665 | orchestrator | 2025-04-10 01:08:13.882680 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-10 01:08:13.882695 | orchestrator | Thursday 10 April 2025 01:07:35 +0000 (0:00:00.066) 0:01:44.257 ******** 2025-04-10 01:08:13.882710 | orchestrator | 2025-04-10 01:08:13.882724 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2025-04-10 01:08:13.882739 | orchestrator | Thursday 10 April 2025 01:07:35 +0000 (0:00:00.217) 0:01:44.475 ******** 2025-04-10 01:08:13.882754 | orchestrator | 2025-04-10 01:08:13.882768 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-04-10 01:08:13.882783 | orchestrator | Thursday 10 April 2025 01:07:36 +0000 (0:00:00.065) 0:01:44.540 ******** 2025-04-10 01:08:13.882797 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:08:13.882812 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:08:13.882827 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:08:13.882841 | orchestrator | 2025-04-10 01:08:13.882856 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-04-10 01:08:13.882870 | orchestrator | Thursday 10 April 2025 01:07:50 +0000 (0:00:14.865) 0:01:59.406 ******** 2025-04-10 01:08:13.882885 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:08:13.882899 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:08:13.882914 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:08:13.882928 | orchestrator | 2025-04-10 01:08:13.882943 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:08:13.882959 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-10 01:08:13.883019 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-10 01:08:13.883034 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-10 01:08:13.883048 | orchestrator | 2025-04-10 01:08:13.883062 | orchestrator | 2025-04-10 01:08:13.883076 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:08:13.883090 | orchestrator | Thursday 10 
April 2025 01:08:06 +0000 (0:00:16.044) 0:02:15.450 ******** 2025-04-10 01:08:13.883104 | orchestrator | =============================================================================== 2025-04-10 01:08:13.883118 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.88s 2025-04-10 01:08:13.883131 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.04s 2025-04-10 01:08:13.883145 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.87s 2025-04-10 01:08:13.883159 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 10.83s 2025-04-10 01:08:13.883173 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.74s 2025-04-10 01:08:13.883187 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.91s 2025-04-10 01:08:13.883227 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 4.71s 2025-04-10 01:08:13.883244 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.52s 2025-04-10 01:08:13.883261 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.11s 2025-04-10 01:08:13.883276 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.04s 2025-04-10 01:08:13.883292 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.91s 2025-04-10 01:08:13.883322 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.66s 2025-04-10 01:08:13.883340 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.61s 2025-04-10 01:08:13.883356 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.59s 2025-04-10 01:08:13.883372 | orchestrator | service-cert-copy : magnum | 
Copying over backend internal TLS key ------ 3.50s 2025-04-10 01:08:13.883387 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.47s 2025-04-10 01:08:13.883403 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.47s 2025-04-10 01:08:13.883419 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 3.12s 2025-04-10 01:08:13.883435 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.85s 2025-04-10 01:08:13.883451 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.70s 2025-04-10 01:08:13.883466 | orchestrator | 2025-04-10 01:08:10 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:08:13.883582 | orchestrator | 2025-04-10 01:08:13 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state STARTED 2025-04-10 01:08:13.884535 | orchestrator | 2025-04-10 01:08:13 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:08:13.884641 | orchestrator | 2025-04-10 01:08:13 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:08:13.884675 | orchestrator | 2025-04-10 01:08:13 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED 2025-04-10 01:08:13.885218 | orchestrator | 2025-04-10 01:08:13 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 
2025-04-10 01:09:05.715673 | orchestrator | 2025-04-10 01:09:05 | INFO  | Task e1ad3211-ddb9-41fb-bd31-ee5b317383c4 is in state SUCCESS 2025-04-10 01:09:05.717334 | orchestrator | 2025-04-10 01:09:05.717495 | orchestrator | 2025-04-10 01:09:05.717514 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 01:09:05.717529 | orchestrator | 2025-04-10 01:09:05.717911 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 01:09:05.717929 | orchestrator | Thursday 10 April 2025 01:03:30 +0000 (0:00:00.680) 0:00:00.680 ******** 2025-04-10 01:09:05.717944 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:09:05.717960 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:09:05.718477 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:09:05.718503 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:09:05.718517 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:09:05.718531 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:09:05.718545 | orchestrator | 2025-04-10 01:09:05.718560 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 01:09:05.718575 | orchestrator | Thursday 10 April 2025 01:03:31 +0000 (0:00:01.377) 0:00:02.057 ******** 2025-04-10 01:09:05.718589 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-04-10 01:09:05.718604 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-04-10 01:09:05.718618 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-04-10 01:09:05.718632 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 
2025-04-10 01:09:05.718646 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-04-10 01:09:05.718660 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-04-10 01:09:05.718674 | orchestrator | 2025-04-10 01:09:05.718688 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-04-10 01:09:05.718702 | orchestrator | 2025-04-10 01:09:05.718716 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-04-10 01:09:05.718730 | orchestrator | Thursday 10 April 2025 01:03:32 +0000 (0:00:01.071) 0:00:03.129 ******** 2025-04-10 01:09:05.718745 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:09:05.718761 | orchestrator | 2025-04-10 01:09:05.718775 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-04-10 01:09:05.718790 | orchestrator | Thursday 10 April 2025 01:03:34 +0000 (0:00:01.476) 0:00:04.605 ******** 2025-04-10 01:09:05.718804 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:09:05.718818 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:09:05.718832 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:09:05.718846 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:09:05.718860 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:09:05.718874 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:09:05.718888 | orchestrator | 2025-04-10 01:09:05.718902 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-04-10 01:09:05.718916 | orchestrator | Thursday 10 April 2025 01:03:35 +0000 (0:00:01.505) 0:00:06.111 ******** 2025-04-10 01:09:05.718930 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:09:05.718944 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:09:05.718958 | orchestrator | ok: [testbed-node-2] 
2025-04-10 01:09:05.718972 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:09:05.719034 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:09:05.719049 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:09:05.719063 | orchestrator | 2025-04-10 01:09:05.719673 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-04-10 01:09:05.719758 | orchestrator | Thursday 10 April 2025 01:03:37 +0000 (0:00:01.316) 0:00:07.428 ******** 2025-04-10 01:09:05.719782 | orchestrator | ok: [testbed-node-0] => { 2025-04-10 01:09:05.719797 | orchestrator |  "changed": false, 2025-04-10 01:09:05.719812 | orchestrator |  "msg": "All assertions passed" 2025-04-10 01:09:05.719843 | orchestrator | } 2025-04-10 01:09:05.719857 | orchestrator | ok: [testbed-node-1] => { 2025-04-10 01:09:05.719871 | orchestrator |  "changed": false, 2025-04-10 01:09:05.719885 | orchestrator |  "msg": "All assertions passed" 2025-04-10 01:09:05.719899 | orchestrator | } 2025-04-10 01:09:05.719913 | orchestrator | ok: [testbed-node-2] => { 2025-04-10 01:09:05.720125 | orchestrator |  "changed": false, 2025-04-10 01:09:05.720154 | orchestrator |  "msg": "All assertions passed" 2025-04-10 01:09:05.720169 | orchestrator | } 2025-04-10 01:09:05.720183 | orchestrator | ok: [testbed-node-3] => { 2025-04-10 01:09:05.720197 | orchestrator |  "changed": false, 2025-04-10 01:09:05.720661 | orchestrator |  "msg": "All assertions passed" 2025-04-10 01:09:05.720682 | orchestrator | } 2025-04-10 01:09:05.720695 | orchestrator | ok: [testbed-node-4] => { 2025-04-10 01:09:05.720708 | orchestrator |  "changed": false, 2025-04-10 01:09:05.720720 | orchestrator |  "msg": "All assertions passed" 2025-04-10 01:09:05.720733 | orchestrator | } 2025-04-10 01:09:05.720745 | orchestrator | ok: [testbed-node-5] => { 2025-04-10 01:09:05.720757 | orchestrator |  "changed": false, 2025-04-10 01:09:05.720770 | orchestrator |  "msg": "All assertions passed" 2025-04-10 01:09:05.720783 | 
orchestrator | } 2025-04-10 01:09:05.721348 | orchestrator | 2025-04-10 01:09:05.721371 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-04-10 01:09:05.721385 | orchestrator | Thursday 10 April 2025 01:03:38 +0000 (0:00:00.889) 0:00:08.317 ******** 2025-04-10 01:09:05.721397 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.721409 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.721422 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.721434 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.721447 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.721459 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.721472 | orchestrator | 2025-04-10 01:09:05.721484 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-04-10 01:09:05.721497 | orchestrator | Thursday 10 April 2025 01:03:39 +0000 (0:00:00.941) 0:00:09.259 ******** 2025-04-10 01:09:05.721510 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-04-10 01:09:05.721522 | orchestrator | 2025-04-10 01:09:05.721535 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-04-10 01:09:05.721547 | orchestrator | Thursday 10 April 2025 01:03:42 +0000 (0:00:03.815) 0:00:13.074 ******** 2025-04-10 01:09:05.721560 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-04-10 01:09:05.721575 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-04-10 01:09:05.721587 | orchestrator | 2025-04-10 01:09:05.721675 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-04-10 01:09:05.721697 | orchestrator | Thursday 10 April 2025 01:03:49 +0000 (0:00:06.578) 0:00:19.653 ******** 2025-04-10 01:09:05.721710 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-10 01:09:05.721723 | orchestrator | 2025-04-10 01:09:05.721745 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-04-10 01:09:05.721757 | orchestrator | Thursday 10 April 2025 01:03:53 +0000 (0:00:03.484) 0:00:23.137 ******** 2025-04-10 01:09:05.721770 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-10 01:09:05.721782 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-04-10 01:09:05.721795 | orchestrator | 2025-04-10 01:09:05.721807 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-04-10 01:09:05.721820 | orchestrator | Thursday 10 April 2025 01:03:57 +0000 (0:00:04.059) 0:00:27.197 ******** 2025-04-10 01:09:05.721832 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-10 01:09:05.721845 | orchestrator | 2025-04-10 01:09:05.721857 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-04-10 01:09:05.721869 | orchestrator | Thursday 10 April 2025 01:04:00 +0000 (0:00:03.506) 0:00:30.703 ******** 2025-04-10 01:09:05.722234 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-04-10 01:09:05.722269 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-04-10 01:09:05.722282 | orchestrator | 2025-04-10 01:09:05.722295 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-04-10 01:09:05.722307 | orchestrator | Thursday 10 April 2025 01:04:09 +0000 (0:00:09.073) 0:00:39.777 ******** 2025-04-10 01:09:05.722320 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.722332 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.722344 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.722357 | orchestrator | skipping: [testbed-node-3] 2025-04-10 
01:09:05.722369 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.722381 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.722394 | orchestrator | 2025-04-10 01:09:05.722406 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-04-10 01:09:05.722419 | orchestrator | Thursday 10 April 2025 01:04:11 +0000 (0:00:01.468) 0:00:41.246 ******** 2025-04-10 01:09:05.722431 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.722443 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.722456 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.722468 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.722480 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.723728 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.723745 | orchestrator | 2025-04-10 01:09:05.723756 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-04-10 01:09:05.723767 | orchestrator | Thursday 10 April 2025 01:04:17 +0000 (0:00:06.072) 0:00:47.319 ******** 2025-04-10 01:09:05.723778 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:09:05.723788 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:09:05.723799 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:09:05.723809 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:09:05.723819 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:09:05.723829 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:09:05.723839 | orchestrator | 2025-04-10 01:09:05.723850 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-04-10 01:09:05.723860 | orchestrator | Thursday 10 April 2025 01:04:19 +0000 (0:00:02.048) 0:00:49.367 ******** 2025-04-10 01:09:05.723870 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.723880 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.723890 | orchestrator | 
skipping: [testbed-node-2] 2025-04-10 01:09:05.723900 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.723910 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.724572 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.724583 | orchestrator | 2025-04-10 01:09:05.724594 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-04-10 01:09:05.724605 | orchestrator | Thursday 10 April 2025 01:04:24 +0000 (0:00:05.187) 0:00:54.554 ******** 2025-04-10 01:09:05.724637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.724785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.724818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.724831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.724847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.724864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.726381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 
'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.726407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.726476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.726494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.726545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}}) 
 2025-04-10 01:09:05.726627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.726680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.726737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.726786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.726832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.726856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.726908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.726923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.726939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.726953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.726968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.727076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.727092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  
2025-04-10 01:09:05.727123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.727148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.727170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.727186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.727208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.727271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.727294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.727318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.727333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.727394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-10 01:09:05.727446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.727508 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.727530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.727546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.727612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.727628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.727649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.727679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.727716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.727745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.727779 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-10 01:09:05.727795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.727856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.727872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.727894 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.727909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 
'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.727957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.727972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.728007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.728032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.728048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.728069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.728098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.728114 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.728136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.728152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.728167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.728198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.728214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-10 01:09:05.728230 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.728250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.728265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.728280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.728312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.728328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.728343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.728358 | orchestrator | 2025-04-10 01:09:05.728373 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-04-10 01:09:05.728388 | orchestrator | Thursday 10 April 2025 01:04:29 +0000 (0:00:04.817) 0:00:59.372 ******** 2025-04-10 01:09:05.728402 | orchestrator | [WARNING]: Skipped 2025-04-10 01:09:05.728425 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-04-10 01:09:05.728448 | orchestrator | due to this access issue: 2025-04-10 01:09:05.728479 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-04-10 01:09:05.728503 | orchestrator | a directory 2025-04-10 01:09:05.728527 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-10 01:09:05.728551 | orchestrator | 2025-04-10 01:09:05.728583 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-04-10 01:09:05.728604 | orchestrator | Thursday 10 April 2025 01:04:30 +0000 
(0:00:00.910) 0:01:00.283 ******** 2025-04-10 01:09:05.728628 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:09:05.728652 | orchestrator | 2025-04-10 01:09:05.728676 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-04-10 01:09:05.728700 | orchestrator | Thursday 10 April 2025 01:04:31 +0000 (0:00:01.842) 0:01:02.125 ******** 2025-04-10 01:09:05.728715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.728730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-10 01:09:05.728761 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-10 01:09:05.728777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 
01:09:05.728801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.730130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-10 01:09:05.730167 | orchestrator | 2025-04-10 01:09:05.730177 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-04-10 01:09:05.730186 | orchestrator | Thursday 10 April 2025 01:04:37 +0000 (0:00:05.265) 0:01:07.391 
******** 2025-04-10 01:09:05.730196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.730206 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.730233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.730243 | 
orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.730252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.730270 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.730279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.730288 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.730304 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.730313 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.730329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.730339 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.730347 | orchestrator | 2025-04-10 01:09:05.730356 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-04-10 01:09:05.730365 | orchestrator | Thursday 10 April 2025 01:04:42 +0000 (0:00:05.135) 0:01:12.526 ******** 2025-04-10 01:09:05.730374 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.730387 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.730396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.730405 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.730419 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.730429 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.730443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.730453 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.730462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.730471 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.730480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.730494 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.730503 | orchestrator | 2025-04-10 01:09:05.730511 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-04-10 01:09:05.730520 | orchestrator | Thursday 10 April 2025 01:04:47 +0000 (0:00:05.271) 0:01:17.797 ******** 2025-04-10 01:09:05.730529 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.730537 | orchestrator | skipping: 
[testbed-node-1] 2025-04-10 01:09:05.730546 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.730554 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.730563 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.730572 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.730580 | orchestrator | 2025-04-10 01:09:05.730589 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-04-10 01:09:05.730597 | orchestrator | Thursday 10 April 2025 01:04:52 +0000 (0:00:04.746) 0:01:22.544 ******** 2025-04-10 01:09:05.730660 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.730670 | orchestrator | 2025-04-10 01:09:05.730681 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-04-10 01:09:05.730690 | orchestrator | Thursday 10 April 2025 01:04:52 +0000 (0:00:00.227) 0:01:22.772 ******** 2025-04-10 01:09:05.730699 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.730708 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.730716 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.730725 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.730733 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.730742 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.730750 | orchestrator | 2025-04-10 01:09:05.730759 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-04-10 01:09:05.730768 | orchestrator | Thursday 10 April 2025 01:04:54 +0000 (0:00:01.891) 0:01:24.663 ******** 2025-04-10 01:09:05.730801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.730812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.730832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.730842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.730852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.730866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.730876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.730885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.730898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.730913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.730924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.730934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.730949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.730967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.731047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.731056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731065 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.731079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.731089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731118 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.731154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.731182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-04-10 01:09:05.731202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.731221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.731239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.731252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.731285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.731294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731303 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.731312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.731326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-04-10 01:09:05.731364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.731373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 
01:09:05.731395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.731408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.731433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.731451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.731464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.731477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.731511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.731559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.731570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.731635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731644 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.731653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.731662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.731715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.731724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.731734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.731743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.731761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.731796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.731858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.731869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.731922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.731932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.731950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.731959 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.731968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.731992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.732019 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.732030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.732039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port 
neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.732048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.732057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.732078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.732087 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.732109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.732119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.732128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.732138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.732147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 
01:09:05.732160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.732180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.732190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.732200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.732209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.732218 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.732240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.732255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.732264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.732273 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.732282 | orchestrator | 2025-04-10 01:09:05.732291 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-04-10 01:09:05.732300 | orchestrator | Thursday 10 April 2025 01:05:00 +0000 (0:00:06.084) 0:01:30.748 ******** 2025-04-10 01:09:05.732309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.732318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.732330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.732344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.734122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.734142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.734157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.734190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.734230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.734260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.734270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.734292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734301 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.734321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.734392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.734401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.734423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.734442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.734464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.734514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.734545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.734554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.734595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.734669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.734682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.734724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.734734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.734743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 
'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.734783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.734827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.734841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.734908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.734931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.734940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.734950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.734969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-10 01:09:05.734996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.735060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.735069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.735082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.735101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.735127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.735141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-10 01:09:05.735164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.735179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.735201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.735223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.735241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.735250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.735281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.735304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 
'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.735320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.735329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.735356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-10 01:09:05.735374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.735399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.735408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.735443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.735452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735461 | orchestrator | 2025-04-10 01:09:05.735470 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-04-10 01:09:05.735479 | orchestrator | Thursday 10 April 2025 01:05:05 +0000 (0:00:05.234) 0:01:35.982 ******** 2025-04-10 01:09:05.735488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.735497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.735543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 
'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.735576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.735589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.735608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.735661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.735679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.735688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.735720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.735765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.735787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.735801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.735820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.735862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.735872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.735881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 01:09:05.735890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 01:09:05.735906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.735919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-10 01:09:05.735932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.735941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:09:05.735950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 01:09:05.735959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.735972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-10 01:09:05.736001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-10 01:09:05.736015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-10 01:09:05.736034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-04-10 01:09:05.736088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 01:09:05.736107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 01:09:05.736116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-10 01:09:05.736145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:09:05.736167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 01:09:05.736176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-10 01:09:05.736205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-10 01:09:05.736215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736229 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-10 01:09:05.736238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:09:05.736257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 01:09:05.736279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736289 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-10 01:09:05.736302 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-10 01:09:05.736318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:09:05.736341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-10 01:09:05.736350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 01:09:05.736359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-10 01:09:05.736399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-10 01:09:05.736413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-10 01:09:05.736434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-04-10 01:09:05.736482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 01:09:05.736501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 01:09:05.736520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-10 01:09:05.736544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:09:05.736562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 01:09:05.736572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.736601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True,
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.736618 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-10 01:09:05.736632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.736641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.736822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.736834 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.736856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.736866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.736882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.736892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.736901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
2025-04-10 01:09:05.736910 | orchestrator |
2025-04-10 01:09:05.736919 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-04-10 01:09:05.736928 | orchestrator | Thursday 10 April 2025 01:05:13 +0000 (0:00:07.692) 0:01:43.675 ********
2025-04-10 01:09:05.736941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.736950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.736963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.736973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.737066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.737115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737124 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.737170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.737188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.737245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.737254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.737272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.737320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.737330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737339 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737353 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.737367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.737377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.737386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.737405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.737451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 
01:09:05.737460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.737511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.737529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.737567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.737575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737584 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.737592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.737601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.737642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.737692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.737701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.737721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.737760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.737768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.737810 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.737818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.737827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737839 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.737868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.737897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.737917 | orchestrator | 2025-04-10 01:09:05 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:09:05.738079 | orchestrator | 2025-04-10 01:09:05 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:09:05.738100 | 
orchestrator | 2025-04-10 01:09:05 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED 2025-04-10 01:09:05.738109 | orchestrator | 2025-04-10 01:09:05 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:09:05.738117 | orchestrator | 2025-04-10 01:09:05 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:09:05.738132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.738142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.738150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.738167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.738199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.738208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.738221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.738238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.738259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.738277 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.738290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.738307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.738319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738327 | orchestrator | 2025-04-10 01:09:05.738335 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-04-10 01:09:05.738344 | orchestrator | Thursday 10 April 2025 01:05:16 +0000 (0:00:03.290) 0:01:46.966 ******** 2025-04-10 01:09:05.738352 | orchestrator | skipping: [testbed-node-5] 2025-04-10 
01:09:05.738360 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:09:05.738368 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.738376 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.738388 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:09:05.738396 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:09:05.738404 | orchestrator | 2025-04-10 01:09:05.738412 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-04-10 01:09:05.738420 | orchestrator | Thursday 10 April 2025 01:05:22 +0000 (0:00:05.977) 0:01:52.944 ******** 2025-04-10 01:09:05.738428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.738437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-04-10 01:09:05.738466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.738478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 
01:09:05.738496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.738504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.738525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.738546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.738554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.738572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.738583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738596 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.738605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.738613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.738654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 
01:09:05.738665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.738675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.738684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.738703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.738734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.738744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.738764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.738773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738787 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.738801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.738811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.738856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.738876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.738885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.738904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.738932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.738942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.738951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.738961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.738971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739002 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.739016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.739026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739036 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.739046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.739087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.739122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.739143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.739160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.739169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.739201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.739279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.739292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.739307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  
2025-04-10 01:09:05.739316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.739345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.739354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.739371 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.739393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.739404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.739422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.739431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.739453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.739495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.739522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.739533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.739551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.739573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.739582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.739603 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.739612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.739624 | orchestrator | 2025-04-10 01:09:05.739633 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-04-10 01:09:05.739641 | orchestrator | Thursday 10 April 2025 01:05:29 +0000 (0:00:06.202) 0:01:59.146 ******** 2025-04-10 01:09:05.739649 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.739657 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.739665 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.739673 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.739681 | orchestrator | skipping: [testbed-node-5] 
2025-04-10 01:09:05.739689 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.739697 | orchestrator | 2025-04-10 01:09:05.739705 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-04-10 01:09:05.739713 | orchestrator | Thursday 10 April 2025 01:05:31 +0000 (0:00:02.629) 0:02:01.776 ******** 2025-04-10 01:09:05.739721 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.739729 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.739740 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.739748 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.739756 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.739764 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.739772 | orchestrator | 2025-04-10 01:09:05.739780 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-04-10 01:09:05.739788 | orchestrator | Thursday 10 April 2025 01:05:34 +0000 (0:00:02.584) 0:02:04.361 ******** 2025-04-10 01:09:05.739796 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.739821 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.739830 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.739838 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.739846 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.739854 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.739862 | orchestrator | 2025-04-10 01:09:05.739870 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-04-10 01:09:05.739878 | orchestrator | Thursday 10 April 2025 01:05:37 +0000 (0:00:03.581) 0:02:07.942 ******** 2025-04-10 01:09:05.739886 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.739894 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.739902 | orchestrator | skipping: [testbed-node-0] 
2025-04-10 01:09:05.739912 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.739921 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.739930 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.739940 | orchestrator | 2025-04-10 01:09:05.739949 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-04-10 01:09:05.739959 | orchestrator | Thursday 10 April 2025 01:05:39 +0000 (0:00:01.895) 0:02:09.837 ******** 2025-04-10 01:09:05.739968 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.740022 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.740032 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.740042 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.740051 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.740060 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.740068 | orchestrator | 2025-04-10 01:09:05.740076 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-04-10 01:09:05.740084 | orchestrator | Thursday 10 April 2025 01:05:41 +0000 (0:00:02.224) 0:02:12.062 ******** 2025-04-10 01:09:05.740092 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.740100 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.740108 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.740124 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.740136 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.740144 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.740152 | orchestrator | 2025-04-10 01:09:05.740160 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-04-10 01:09:05.740168 | orchestrator | Thursday 10 April 2025 01:05:44 +0000 (0:00:02.269) 0:02:14.331 ******** 2025-04-10 01:09:05.740176 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-10 01:09:05.740184 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.740192 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-10 01:09:05.740200 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.740209 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-10 01:09:05.740217 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.740225 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-10 01:09:05.740233 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.740241 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-10 01:09:05.740249 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.740256 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-10 01:09:05.740263 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.740270 | orchestrator | 2025-04-10 01:09:05.740277 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-04-10 01:09:05.740284 | orchestrator | Thursday 10 April 2025 01:05:48 +0000 (0:00:04.434) 0:02:18.765 ******** 2025-04-10 01:09:05.740291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.740299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740322 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.740337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.740352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.740374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-04-10 01:09:05.740389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.740397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.740412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.740419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.740445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.740456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740463 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.740470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.740484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2025-04-10 01:09:05.740521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.740536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.740549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.740570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740578 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.740585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.740592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.740617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.740627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740635 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.740642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.740650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.740692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.740707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.740720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.740741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.740759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.740766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.740791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.740799 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740806 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.740816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.740824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.740866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.740940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.740953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.740965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.741037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.741095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 
01:09:05.741108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.741142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.741149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741156 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.741201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.741215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.741259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.741318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.741327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.741355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.741370 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.741413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.741449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.741457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741464 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.741471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.741514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.741584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 
'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.741648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.741665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.741694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.741709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.741716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.741789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.741797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741804 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.741811 | orchestrator | 2025-04-10 01:09:05.741819 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-04-10 01:09:05.741826 | orchestrator | Thursday 10 April 2025 01:05:53 +0000 (0:00:04.451) 0:02:23.216 ******** 2025-04-10 01:09:05.741833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.741887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 
'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.741959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.741972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.742061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.742136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.742175 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.742215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.742224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.742314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.742322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742329 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.742336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.742343 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.742444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.742458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.742505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.742570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.742586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.742593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2025-04-10 01:09:05.742676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.742684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742691 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.742698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.742704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.742815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.742829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.742836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.742918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.742940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.742947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.742962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.743060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.743078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743090 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.743098 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.743105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.743118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.743272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.743316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.743322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}}})  2025-04-10 01:09:05.743334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.743411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.743447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.743455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743462 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.743481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.743528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.743543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.743589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.743614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.743671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.743697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.743725 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.743732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.743762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743770 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.743777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.743784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.743855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.743873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.743885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.743898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743923 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.743931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.743938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.743955 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.743962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.743968 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.743974 | orchestrator | 2025-04-10 01:09:05.743998 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-04-10 01:09:05.744005 | orchestrator | Thursday 10 April 2025 01:05:55 +0000 (0:00:02.545) 0:02:25.762 ******** 2025-04-10 01:09:05.744011 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.744017 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.744023 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.744030 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.744042 | 
orchestrator | skipping: [testbed-node-4]
2025-04-10 01:09:05.744048 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:09:05.744055 | orchestrator |
2025-04-10 01:09:05.744074 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-04-10 01:09:05.744081 | orchestrator | Thursday 10 April 2025 01:05:58 +0000 (0:00:02.475) 0:02:28.238 ********
2025-04-10 01:09:05.744087 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:09:05.744093 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:09:05.744099 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:09:05.744106 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:09:05.744112 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:09:05.744118 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:09:05.744124 | orchestrator |
2025-04-10 01:09:05.744130 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-04-10 01:09:05.744137 | orchestrator | Thursday 10 April 2025 01:06:06 +0000 (0:00:08.671) 0:02:36.910 ********
2025-04-10 01:09:05.744143 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:09:05.744149 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:09:05.744158 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:09:05.744165 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:09:05.744171 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:09:05.744177 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:09:05.744183 | orchestrator |
2025-04-10 01:09:05.744190 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-04-10 01:09:05.744196 | orchestrator | Thursday 10 April 2025 01:06:09 +0000 (0:00:02.410) 0:02:39.320 ********
2025-04-10 01:09:05.744202 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:09:05.744208 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:09:05.744214 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:09:05.744221 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:09:05.744227 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:09:05.744233 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:09:05.744239 | orchestrator |
2025-04-10 01:09:05.744245 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-04-10 01:09:05.744251 | orchestrator | Thursday 10 April 2025 01:06:12 +0000 (0:00:03.712) 0:02:43.032 ********
2025-04-10 01:09:05.744257 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:09:05.744264 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:09:05.744270 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:09:05.744276 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:09:05.744282 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:09:05.744288 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:09:05.744294 | orchestrator |
2025-04-10 01:09:05.744301 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-04-10 01:09:05.744308 | orchestrator | Thursday 10 April 2025 01:06:17 +0000 (0:00:04.365) 0:02:47.398 ********
2025-04-10 01:09:05.744315 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:09:05.744322 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:09:05.744329 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:09:05.744336 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:09:05.744343 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:09:05.744351 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:09:05.744358 | orchestrator |
2025-04-10 01:09:05.744365 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-04-10 01:09:05.744372 | orchestrator | Thursday 10 April 2025 01:06:20 +0000 (0:00:03.561) 0:02:50.959 ********
2025-04-10 01:09:05.744379 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:09:05.744387 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:09:05.744394 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:09:05.744401 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:09:05.744408 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:09:05.744416 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:09:05.744423 | orchestrator |
2025-04-10 01:09:05.744430 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-04-10 01:09:05.744437 | orchestrator | Thursday 10 April 2025 01:06:24 +0000 (0:00:03.788) 0:02:54.748 ********
2025-04-10 01:09:05.744444 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:09:05.744451 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:09:05.744459 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:09:05.744466 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:09:05.744473 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:09:05.744480 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:09:05.744487 | orchestrator |
2025-04-10 01:09:05.744494 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-04-10 01:09:05.744501 | orchestrator | Thursday 10 April 2025 01:06:31 +0000 (0:00:07.109) 0:03:01.858 ********
2025-04-10 01:09:05.744508 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:09:05.744515 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:09:05.744522 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:09:05.744529 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:09:05.744540 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:09:05.744547 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:09:05.744555 | orchestrator |
2025-04-10 01:09:05.744562 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-04-10 01:09:05.744569 | orchestrator | Thursday 10 April 2025 01:06:35 +0000 (0:00:03.656) 0:03:05.514 ********
2025-04-10 01:09:05.744576 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:09:05.744586 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:09:05.744593 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:09:05.744600 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:09:05.744607 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:09:05.744614 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:09:05.744621 | orchestrator |
2025-04-10 01:09:05.744629 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-04-10 01:09:05.744636 | orchestrator | Thursday 10 April 2025 01:06:39 +0000 (0:00:04.153) 0:03:09.668 ********
2025-04-10 01:09:05.744643 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-04-10 01:09:05.744651 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:09:05.744658 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-04-10 01:09:05.744665 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:09:05.744671 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-04-10 01:09:05.744677 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:09:05.744696 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-04-10 01:09:05.744703 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:09:05.744709 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-04-10 01:09:05.744716 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:09:05.744722 | orchestrator | skipping: [testbed-node-5] =>
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-10 01:09:05.744728 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.744735 | orchestrator | 2025-04-10 01:09:05.744741 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-04-10 01:09:05.744747 | orchestrator | Thursday 10 April 2025 01:06:43 +0000 (0:00:03.611) 0:03:13.280 ******** 2025-04-10 01:09:05.744759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.744766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.744777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.744784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.744803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.744810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.744823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.744829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.744841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.744847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.744866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.744879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.744886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.744893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.744903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.744910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.744933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.744941 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:09:05.744947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.744954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.744965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.744972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.745043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.745085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.745118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.745142 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.745148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745155 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:09:05.745178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.745186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745204 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.745228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-04-10 01:09:05.745264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.745271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.745296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.745326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.745333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745340 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:09:05.745346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.745368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2025-04-10 01:09:05.745409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.745461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745467 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.745473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.745513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.745519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745525 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:09:05.745531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.745553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.745584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.745634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.745647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.745686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.745692 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745698 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:09:05.745705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.745716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.745756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.745808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.745820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 
01:09:05.745826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.745859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.745865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745871 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:09:05.745877 | orchestrator | 2025-04-10 01:09:05.745883 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-04-10 01:09:05.745889 | orchestrator | Thursday 10 April 2025 01:06:48 +0000 (0:00:05.769) 0:03:19.049 ******** 2025-04-10 01:09:05.745895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.745907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.745947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.745954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.745975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.746013 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 
'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-10 01:09:05.746072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.746079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746122 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-10 01:09:05.746129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.746157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746163 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.746190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.746197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.746215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.746222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.746256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.746318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.746337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.746346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.746379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 
'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.746389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.746422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.746464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.746477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.746504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.746513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746519 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-10 01:09:05.746528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746539 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-10 01:09:05.746546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.746552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}}})  2025-04-10 01:09:05.746567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:09:05.746579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.746603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.746618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-10 01:09:05.746634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.746640 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-10 01:09:05.746661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-10 
01:09:05.746681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-10 01:09:05.746694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:09:05.746719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-10 01:09:05.746736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-10 01:09:05.746743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:09:05.746749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-10 01:09:05.746757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.746768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-10 01:09:05.746779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-10 01:09:05.746786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-10 01:09:05.746792 | orchestrator |
2025-04-10 01:09:05.746798 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-04-10 01:09:05.746804 |
orchestrator | Thursday 10 April 2025 01:06:55 +0000 (0:00:06.281) 0:03:25.330 ********
2025-04-10 01:09:05.746810 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:09:05.746816 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:09:05.746822 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:09:05.746828 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:09:05.746833 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:09:05.746839 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:09:05.746845 | orchestrator |
2025-04-10 01:09:05.746851 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-04-10 01:09:05.746857 | orchestrator | Thursday 10 April 2025 01:06:55 +0000 (0:00:00.699) 0:03:26.030 ********
2025-04-10 01:09:05.746863 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:09:05.746869 | orchestrator |
2025-04-10 01:09:05.746875 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-04-10 01:09:05.746881 | orchestrator | Thursday 10 April 2025 01:06:58 +0000 (0:00:02.800) 0:03:28.831 ********
2025-04-10 01:09:05.746887 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:09:05.746893 | orchestrator |
2025-04-10 01:09:05.746901 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-04-10 01:09:05.746911 | orchestrator | Thursday 10 April 2025 01:07:01 +0000 (0:00:02.738) 0:03:31.569 ********
2025-04-10 01:09:05.746917 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:09:05.746922 | orchestrator |
2025-04-10 01:09:05.746928 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-10 01:09:05.746934 | orchestrator | Thursday 10 April 2025 01:07:46 +0000 (0:00:45.497) 0:04:17.066 ********
2025-04-10 01:09:05.746940 | orchestrator |
2025-04-10 01:09:05.746946 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-10 01:09:05.746952 | orchestrator | Thursday 10 April 2025 01:07:47 +0000 (0:00:00.067) 0:04:17.134 ********
2025-04-10 01:09:05.746958 | orchestrator |
2025-04-10 01:09:05.746966 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-10 01:09:08.775055 | orchestrator | Thursday 10 April 2025 01:07:47 +0000 (0:00:00.266) 0:04:17.401 ********
2025-04-10 01:09:08.775182 | orchestrator |
2025-04-10 01:09:08.775201 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-10 01:09:08.775215 | orchestrator | Thursday 10 April 2025 01:07:47 +0000 (0:00:00.065) 0:04:17.467 ********
2025-04-10 01:09:08.775227 | orchestrator |
2025-04-10 01:09:08.775240 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-10 01:09:08.775253 | orchestrator | Thursday 10 April 2025 01:07:47 +0000 (0:00:00.059) 0:04:17.526 ********
2025-04-10 01:09:08.775265 | orchestrator |
2025-04-10 01:09:08.775278 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-10 01:09:08.775290 | orchestrator | Thursday 10 April 2025 01:07:47 +0000 (0:00:00.066) 0:04:17.593 ********
2025-04-10 01:09:08.775302 | orchestrator |
2025-04-10 01:09:08.775315 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-04-10 01:09:08.775327 | orchestrator | Thursday 10 April 2025 01:07:47 +0000 (0:00:00.302) 0:04:17.896 ********
2025-04-10 01:09:08.775340 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:09:08.775354 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:09:08.775367 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:09:08.775462 | orchestrator |
2025-04-10 01:09:08.775478 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-04-10 01:09:08.775491 | orchestrator | Thursday 10 April 2025 01:08:15 +0000 (0:00:27.432) 0:04:45.329 ********
2025-04-10 01:09:08.775526 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:09:08.775539 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:09:08.775552 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:09:08.775564 | orchestrator |
2025-04-10 01:09:08.775576 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 01:09:08.775590 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-10 01:09:08.775604 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-04-10 01:09:08.775617 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-04-10 01:09:08.775646 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-04-10 01:09:08.775660 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-04-10 01:09:08.775672 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-04-10 01:09:08.775685 | orchestrator |
2025-04-10 01:09:08.775697 | orchestrator |
2025-04-10 01:09:08.775709 | orchestrator | TASKS RECAP ********************************************************************
2025-04-10 01:09:08.775722 | orchestrator | Thursday 10 April 2025 01:09:03 +0000 (0:00:48.468) 0:05:33.797 ********
2025-04-10 01:09:08.775757 | orchestrator | ===============================================================================
2025-04-10 01:09:08.775770 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 48.47s
2025-04-10 01:09:08.775782 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 45.50s
2025-04-10 01:09:08.775794 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.43s
2025-04-10 01:09:08.775806 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 9.07s
2025-04-10 01:09:08.775819 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 8.67s
2025-04-10 01:09:08.775831 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.69s
2025-04-10 01:09:08.775844 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 7.11s
2025-04-10 01:09:08.775856 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.58s
2025-04-10 01:09:08.775868 | orchestrator | neutron : Check neutron containers -------------------------------------- 6.28s
2025-04-10 01:09:08.775886 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 6.20s
2025-04-10 01:09:08.775899 | orchestrator | neutron : Copying over existing policy file ----------------------------- 6.08s
2025-04-10 01:09:08.775912 | orchestrator | Load and persist kernel modules ----------------------------------------- 6.07s
2025-04-10 01:09:08.775924 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.98s
2025-04-10 01:09:08.775936 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 5.77s
2025-04-10 01:09:08.775949 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 5.27s
2025-04-10 01:09:08.775961 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.27s
2025-04-10 01:09:08.775973 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.23s
2025-04-10 01:09:08.776012 | orchestrator | Setting sysctl values --------------------------------------------------- 5.19s
2025-04-10
01:09:08.776025 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 5.14s
2025-04-10 01:09:08.776038 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 4.82s
2025-04-10 01:09:08.776068 | orchestrator | 2025-04-10 01:09:08 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:09:08.777662 | orchestrator | 2025-04-10 01:09:08 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:09:08.777693 | orchestrator | 2025-04-10 01:09:08 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:09:08.779881 | orchestrator | 2025-04-10 01:09:08 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED
2025-04-10 01:09:08.783624 | orchestrator | 2025-04-10 01:09:08 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED
2025-04-10 01:09:11.923728 | orchestrator | 2025-04-10 01:09:08 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:09:11.923864 | orchestrator | 2025-04-10 01:09:11 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:09:11.927388 | orchestrator | 2025-04-10 01:09:11 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:09:11.930973 | orchestrator | 2025-04-10 01:09:11 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:09:11.933911 | orchestrator | 2025-04-10 01:09:11 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED
2025-04-10 01:09:11.934933 | orchestrator | 2025-04-10 01:09:11 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED
2025-04-10 01:09:14.989430 | orchestrator | 2025-04-10 01:09:11 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:09:14.989601 | orchestrator | 2025-04-10 01:09:14 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:09:14.989848 | orchestrator | 2025-04-10 01:09:14 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:09:14.990502 | orchestrator | 2025-04-10 01:09:14 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:09:14.991452 | orchestrator | 2025-04-10 01:09:14 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED
2025-04-10 01:09:14.992180 | orchestrator | 2025-04-10 01:09:14 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED
2025-04-10 01:09:18.039310 | orchestrator | 2025-04-10 01:09:14 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:09:18.039444 | orchestrator | 2025-04-10 01:09:18 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:09:18.039820 | orchestrator | 2025-04-10 01:09:18 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:09:18.040635 | orchestrator | 2025-04-10 01:09:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:09:18.041271 | orchestrator | 2025-04-10 01:09:18 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED
2025-04-10 01:09:18.042616 | orchestrator | 2025-04-10 01:09:18 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED
2025-04-10 01:09:21.069412 | orchestrator | 2025-04-10 01:09:18 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:09:21.069541 | orchestrator | 2025-04-10 01:09:21 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:09:21.070642 | orchestrator | 2025-04-10 01:09:21 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:09:21.071114 | orchestrator | 2025-04-10 01:09:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:09:21.071662 | orchestrator | 2025-04-10 01:09:21 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED
2025-04-10 01:09:21.072180 |
orchestrator | 2025-04-10 01:09:21 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED
2025-04-10 01:10:52.466832 | orchestrator | 2025-04-10 01:10:52 | INFO  | Task 
1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:10:55.511333 | orchestrator | 2025-04-10 01:10:52 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:10:55.511489 | orchestrator | 2025-04-10 01:10:55 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:10:55.511719 | orchestrator | 2025-04-10 01:10:55 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:10:55.511749 | orchestrator | 2025-04-10 01:10:55 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:10:55.512614 | orchestrator | 2025-04-10 01:10:55 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED 2025-04-10 01:10:55.513834 | orchestrator | 2025-04-10 01:10:55 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:10:58.577757 | orchestrator | 2025-04-10 01:10:55 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:10:58.577883 | orchestrator | 2025-04-10 01:10:58 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:10:58.579827 | orchestrator | 2025-04-10 01:10:58 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:10:58.582668 | orchestrator | 2025-04-10 01:10:58 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:10:58.583522 | orchestrator | 2025-04-10 01:10:58 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED 2025-04-10 01:10:58.586443 | orchestrator | 2025-04-10 01:10:58 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:11:01.636514 | orchestrator | 2025-04-10 01:10:58 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:11:01.636656 | orchestrator | 2025-04-10 01:11:01 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:11:01.638879 | orchestrator | 2025-04-10 01:11:01 | INFO  | Task 
6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:11:01.640429 | orchestrator | 2025-04-10 01:11:01 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:11:01.642508 | orchestrator | 2025-04-10 01:11:01 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED 2025-04-10 01:11:01.647199 | orchestrator | 2025-04-10 01:11:01 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:11:04.683356 | orchestrator | 2025-04-10 01:11:01 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:11:04.683515 | orchestrator | 2025-04-10 01:11:04 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:11:04.684464 | orchestrator | 2025-04-10 01:11:04 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:11:04.687760 | orchestrator | 2025-04-10 01:11:04 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:11:04.690372 | orchestrator | 2025-04-10 01:11:04 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED 2025-04-10 01:11:04.693399 | orchestrator | 2025-04-10 01:11:04 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:11:04.694897 | orchestrator | 2025-04-10 01:11:04 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:11:07.745345 | orchestrator | 2025-04-10 01:11:07 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:11:07.747591 | orchestrator | 2025-04-10 01:11:07 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:11:07.749468 | orchestrator | 2025-04-10 01:11:07 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:11:07.751066 | orchestrator | 2025-04-10 01:11:07 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED 2025-04-10 01:11:07.752331 | orchestrator | 2025-04-10 01:11:07 | INFO  | Task 
1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:11:07.752705 | orchestrator | 2025-04-10 01:11:07 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:11:10.821449 | orchestrator | 2025-04-10 01:11:10 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:11:10.824417 | orchestrator | 2025-04-10 01:11:10 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:11:10.824468 | orchestrator | 2025-04-10 01:11:10 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:11:10.826099 | orchestrator | 2025-04-10 01:11:10 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED 2025-04-10 01:11:10.828545 | orchestrator | 2025-04-10 01:11:10 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:11:10.828907 | orchestrator | 2025-04-10 01:11:10 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:11:13.868315 | orchestrator | 2025-04-10 01:11:13 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:11:13.868976 | orchestrator | 2025-04-10 01:11:13 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:11:13.870857 | orchestrator | 2025-04-10 01:11:13 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:11:13.871808 | orchestrator | 2025-04-10 01:11:13 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED 2025-04-10 01:11:13.872956 | orchestrator | 2025-04-10 01:11:13 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:11:16.920935 | orchestrator | 2025-04-10 01:11:13 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:11:16.921123 | orchestrator | 2025-04-10 01:11:16 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:11:16.921473 | orchestrator | 2025-04-10 01:11:16 | INFO  | Task 
6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:11:16.922670 | orchestrator | 2025-04-10 01:11:16 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:11:16.923572 | orchestrator | 2025-04-10 01:11:16 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED 2025-04-10 01:11:16.923611 | orchestrator | 2025-04-10 01:11:16 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:11:19.971225 | orchestrator | 2025-04-10 01:11:16 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:11:19.971397 | orchestrator | 2025-04-10 01:11:19 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:11:19.972535 | orchestrator | 2025-04-10 01:11:19 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:11:19.972570 | orchestrator | 2025-04-10 01:11:19 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:11:19.973252 | orchestrator | 2025-04-10 01:11:19 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED 2025-04-10 01:11:19.974206 | orchestrator | 2025-04-10 01:11:19 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state STARTED 2025-04-10 01:11:19.977224 | orchestrator | 2025-04-10 01:11:19 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:11:23.029406 | orchestrator | 2025-04-10 01:11:23 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:11:23.031875 | orchestrator | 2025-04-10 01:11:23 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:11:23.031929 | orchestrator | 2025-04-10 01:11:23 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:11:23.033523 | orchestrator | 2025-04-10 01:11:23 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED 2025-04-10 01:11:23.034274 | orchestrator | 2025-04-10 01:11:23 | INFO  | Task 
37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED 2025-04-10 01:11:23.037061 | orchestrator | 2025-04-10 01:11:23 | INFO  | Task 1a704ae2-2663-4e51-a8a1-394a96814e57 is in state SUCCESS 2025-04-10 01:11:23.040265 | orchestrator | 2025-04-10 01:11:23 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:11:23.041135 | orchestrator | 2025-04-10 01:11:23.041201 | orchestrator | 2025-04-10 01:11:23.041229 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 01:11:23.041257 | orchestrator | 2025-04-10 01:11:23.041284 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 01:11:23.041310 | orchestrator | Thursday 10 April 2025 01:06:13 +0000 (0:00:00.541) 0:00:00.541 ******** 2025-04-10 01:11:23.041336 | orchestrator | ok: [testbed-manager] 2025-04-10 01:11:23.041365 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:11:23.041392 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:11:23.041421 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:11:23.041448 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:11:23.041476 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:11:23.041503 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:11:23.041618 | orchestrator | 2025-04-10 01:11:23.041667 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 01:11:23.041684 | orchestrator | Thursday 10 April 2025 01:06:15 +0000 (0:00:02.662) 0:00:03.203 ******** 2025-04-10 01:11:23.041701 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-04-10 01:11:23.041717 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-04-10 01:11:23.041733 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-04-10 01:11:23.041749 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-04-10 01:11:23.041765 | orchestrator | ok: 
[testbed-node-3] => (item=enable_prometheus_True)
2025-04-10 01:11:23.041781 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-04-10 01:11:23.041797 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-04-10 01:11:23.041810 | orchestrator |
2025-04-10 01:11:23.041825 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-04-10 01:11:23.041838 | orchestrator |
2025-04-10 01:11:23.041852 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-04-10 01:11:23.041866 | orchestrator | Thursday 10 April 2025 01:06:17 +0000 (0:00:01.789) 0:00:04.993 ********
2025-04-10 01:11:23.041881 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:11:23.041897 | orchestrator |
2025-04-10 01:11:23.041911 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-04-10 01:11:23.041924 | orchestrator | Thursday 10 April 2025 01:06:19 +0000 (0:00:02.006) 0:00:07.000 ********
2025-04-10 01:11:23.041941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 01:11:23.041981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 01:11:23.041997 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 01:11:23.042150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-10 01:11:23.042174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-10 01:11:23.042190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 01:11:23.042205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 01:11:23.042229 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-10 01:11:23.042244 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042268 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-10 01:11:23.042340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 01:11:23.042361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 01:11:23.042378 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-10 01:11:23.042401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-10 01:11:23.042425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-10 01:11:23.042526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-10 01:11:23.042589 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-10 01:11:23.042611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042651 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-10 01:11:23.042673 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.042687 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-10 01:11:23.042702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-10 01:11:23.042728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-10 01:11:23.042751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.042766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.042789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.042807 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.042831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.042917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.042970 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.043127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.043192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.043257 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.043285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.043300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.043315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.043338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.043364 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.043380 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.043402 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 
01:11:23.043417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.043432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.043455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.043470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.043497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.043519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.043535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.043550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.043572 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.043587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.043612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.043634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.043649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.043664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.043678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.043693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.043713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.043739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.043761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.043776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.043791 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.043805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.043819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.043834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.043867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.043890 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.043905 | orchestrator |
2025-04-10 01:11:23.043919 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-04-10 01:11:23.043934 | orchestrator | Thursday 10 April 2025 01:06:24 +0000 (0:00:04.865) 0:00:11.866 ********
2025-04-10 01:11:23.043948 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:11:23.043962 | orchestrator |
2025-04-10 01:11:23.043977 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-04-10 01:11:23.043990 | orchestrator | Thursday 10 April 2025 01:06:28 +0000 (0:00:04.297) 0:00:16.163 ********
2025-04-10 01:11:23.044005 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive':
True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-10 01:11:23.044053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.044080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.044105 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.044143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.044170 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.044185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.044200 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.044214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.044229 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.044254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.044269 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.044297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.044312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.044327 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-10 01:11:23.044341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.044356 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.044381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.044397 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.044424 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.044769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.044795 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.044812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.044827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.044858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.044875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.044908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.044924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 
01:11:23.044940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-10 01:11:23.044955 | orchestrator |
2025-04-10 01:11:23.044970 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-04-10 01:11:23.044988 | orchestrator | Thursday 10 April 2025 01:06:36 +0000 (0:00:07.690) 0:00:23.853 ********
2025-04-10 01:11:23.045050 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-10 01:11:23.045163 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 01:11:23.045769 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.045863 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.045893 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.045920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 01:11:23.045947 | orchestrator | skipping: [testbed-manager] 2025-04-10 01:11:23.045974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.046000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.046140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.046170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.046210 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:11:23.046725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2025-04-10 01:11:23.046782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.046800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.046820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.046845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.046891 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:11:23.046932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 01:11:23.046949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.047164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 01:11:23.047362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.047382 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:11:23.047409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.047425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.047440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.047455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.047469 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:11:23.047499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 01:11:23.047526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.047541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.047555 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:11:23.047577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 01:11:23.047592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.047606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.047621 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:11:23.047635 | orchestrator | 2025-04-10 01:11:23.047647 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-04-10 01:11:23.047660 | orchestrator | Thursday 10 April 2025 01:06:40 +0000 (0:00:03.715) 0:00:27.569 ******** 2025-04-10 01:11:23.047691 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-10 01:11:23.047711 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 01:11:23.047725 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.047745 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  
2025-04-10 01:11:23.047758 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.047772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 01:11:23.047784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.047814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.047828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.047841 | orchestrator | skipping: [testbed-manager] 2025-04-10 01:11:23.047853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.047866 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:11:23.047879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 01:11:23.047896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.047913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.047930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.047950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.047963 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:11:23.047985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 01:11:23.048003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.048053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.048087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.048112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 01:11:23.048136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.048158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.048323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.048352 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:11:23.048366 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:11:23.048385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 01:11:23.048399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.048412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.048425 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:11:23.048446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-10 01:11:23.048460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.048473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.048506 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:11:23.048519 | orchestrator | 2025-04-10 01:11:23.048532 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-04-10 01:11:23.048545 | orchestrator | Thursday 10 April 2025 01:06:44 +0000 (0:00:04.109) 0:00:31.679 ******** 2025-04-10 01:11:23.048558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-10 01:11:23.048571 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-10 01:11:23.048590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-10 01:11:23.048604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-10 01:11:23.048633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-10 01:11:23.048647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.048661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.048674 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-10 01:11:23.048693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-10 01:11:23.048708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.048736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.048750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.048763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.048777 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.048790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.048803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.048836 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.048850 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.048869 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.048882 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.048896 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-10 01:11:23.048909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.048922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.048943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.048963 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.048984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.048998 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.049071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.049090 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.049103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.049136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.049158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.049171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.049185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.049198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.049211 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.049237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.049257 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.049271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.049284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.049298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.049311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.049337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.049363 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.049377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.049390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.049403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.049422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.049433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.049454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.049466 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-10 01:11:23.049485 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.049496 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.049507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.049517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.049538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.050080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.050106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.050137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.050148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.050194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.050207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 
'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.050218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.050229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.050250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.050261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.050277 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.050308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.050320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 
01:11:23.050330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.050350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.050361 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.050372 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 
'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.050388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.050399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.050430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.050442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.050453 | orchestrator | 2025-04-10 01:11:23.050464 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-04-10 01:11:23.050479 | orchestrator | Thursday 10 April 2025 01:06:55 +0000 (0:00:11.082) 0:00:42.762 ******** 2025-04-10 01:11:23.050489 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-10 01:11:23.050500 | orchestrator | 2025-04-10 01:11:23.050510 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-04-10 01:11:23.050521 | orchestrator | Thursday 10 April 2025 01:06:55 +0000 (0:00:00.471) 0:00:43.233 ******** 2025-04-10 01:11:23.050540 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081174, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0005457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050551 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081174, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0005457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050562 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081174, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0005457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050578 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081174, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0005457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050607 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081174, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 
1737057119.0, 'ctime': 1744244092.0005457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050618 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081190, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050638 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081174, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0005457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050651 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081190, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050663 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081190, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050683 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081190, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050695 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081174, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0005457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 01:11:23.050724 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081190, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050745 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081190, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050758 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081177, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0015457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050770 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081177, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0015457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050782 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081177, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0015457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050799 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081177, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0015457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050810 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081177, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 
'ctime': 1744244092.0015457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050838 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081177, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0015457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050858 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081188, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050869 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081188, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050880 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081188, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050891 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081188, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050908 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081188, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050919 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081188, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050940 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081190, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 01:11:23.050970 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081216, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050982 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081216, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.050993 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081216, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051004 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081216, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051041 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081216, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051053 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081216, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051073 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081197, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051105 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081197, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051117 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081197, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051127 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081197, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051143 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081197, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051154 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081197, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051165 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081186, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051185 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081186, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051214 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081186, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051226 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081186, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051237 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081186, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051253 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081196, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 
'mtime': 1737057118.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051263 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081186, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051282 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081196, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051294 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081177, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0015457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 01:11:23.051323 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081196, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051335 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081196, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051346 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081214, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051363 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081196, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051382 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081214, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051393 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081196, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051404 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081214, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051433 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081214, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051445 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081183, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051462 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081214, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 
1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051472 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081183, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051491 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081214, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051502 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081183, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051513 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1081202, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051524 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:11:23.051554 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081183, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051566 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081183, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2025-04-10 01:11:23.051582 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081188, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 01:11:23.051601 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081183, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051612 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1081202, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051623 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:11:23.051634 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1081202, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051644 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:11:23.051655 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1081202, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051665 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:11:23.051694 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1081202, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051711 | orchestrator | skipping: 
[testbed-node-5] 2025-04-10 01:11:23.051722 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1081202, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-10 01:11:23.051733 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:11:23.051744 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081216, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 01:11:23.051762 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081197, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 01:11:23.051774 | orchestrator | changed: [testbed-manager] 
=> (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081186, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 01:11:23.051784 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081196, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0035458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 01:11:23.051813 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081214, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.007546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 01:11:23.051825 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081183, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244092.0025458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 01:11:23.051844 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1081202, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244092.0045457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-10 01:11:23.051855 | orchestrator | 2025-04-10 01:11:23.051865 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-04-10 01:11:23.051875 | orchestrator | Thursday 10 April 2025 01:07:33 +0000 (0:00:37.953) 0:01:21.187 ******** 2025-04-10 01:11:23.051886 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-10 01:11:23.051896 | orchestrator | 2025-04-10 01:11:23.051906 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-04-10 01:11:23.051917 | orchestrator | Thursday 10 April 2025 01:07:34 +0000 (0:00:00.422) 0:01:21.609 ******** 2025-04-10 01:11:23.051927 | orchestrator | [WARNING]: Skipped 2025-04-10 01:11:23.051937 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.051947 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-04-10 01:11:23.051958 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.051968 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-04-10 01:11:23.051978 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-10 01:11:23.051989 | orchestrator | [WARNING]: Skipped 2025-04-10 01:11:23.051999 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.052009 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-04-10 01:11:23.052162 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.052175 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-04-10 01:11:23.052186 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-10 01:11:23.052196 | orchestrator | [WARNING]: Skipped 2025-04-10 01:11:23.052206 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.052217 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-04-10 01:11:23.052227 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.052237 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-04-10 01:11:23.052247 | orchestrator | [WARNING]: Skipped 2025-04-10 01:11:23.052257 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.052268 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-04-10 01:11:23.052278 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.052288 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-04-10 01:11:23.052298 | orchestrator | [WARNING]: Skipped 2025-04-10 01:11:23.052308 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.052318 | 
orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-04-10 01:11:23.052328 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.052339 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-04-10 01:11:23.052350 | orchestrator | [WARNING]: Skipped 2025-04-10 01:11:23.052369 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.052379 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-04-10 01:11:23.052387 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.052396 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-04-10 01:11:23.052405 | orchestrator | [WARNING]: Skipped 2025-04-10 01:11:23.052413 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.052422 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-04-10 01:11:23.052430 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-10 01:11:23.052439 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-04-10 01:11:23.052447 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-10 01:11:23.052456 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-10 01:11:23.052465 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-10 01:11:23.052473 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-10 01:11:23.052513 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-10 01:11:23.052523 | orchestrator | 2025-04-10 01:11:23.052532 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-04-10 01:11:23.052541 | orchestrator | Thursday 10 April 2025 01:07:35 +0000 (0:00:01.466) 0:01:23.076 ******** 2025-04-10 01:11:23.052549 | orchestrator | skipping: [testbed-node-0] 
=> (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-10 01:11:23.052558 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:11:23.052567 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-10 01:11:23.052576 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:11:23.052585 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-10 01:11:23.052593 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:11:23.052602 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-10 01:11:23.052611 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:11:23.052619 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-10 01:11:23.052628 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:11:23.052636 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-10 01:11:23.052645 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:11:23.052653 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-04-10 01:11:23.052662 | orchestrator | 2025-04-10 01:11:23.052671 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-04-10 01:11:23.052679 | orchestrator | Thursday 10 April 2025 01:08:01 +0000 (0:00:26.110) 0:01:49.186 ******** 2025-04-10 01:11:23.052688 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-10 01:11:23.052696 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:11:23.052705 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-10 01:11:23.052714 | orchestrator | skipping: 
[testbed-node-0] 2025-04-10 01:11:23.052722 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-10 01:11:23.052731 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:11:23.052740 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-10 01:11:23.052748 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:11:23.052757 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-10 01:11:23.052766 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:11:23.052779 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-10 01:11:23.052788 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:11:23.052796 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-04-10 01:11:23.052805 | orchestrator | 2025-04-10 01:11:23.052814 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-04-10 01:11:23.052822 | orchestrator | Thursday 10 April 2025 01:08:06 +0000 (0:00:04.676) 0:01:53.863 ******** 2025-04-10 01:11:23.052831 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-10 01:11:23.052841 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:11:23.052849 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-10 01:11:23.052858 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:11:23.052867 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-10 01:11:23.052875 | orchestrator | 
skipping: [testbed-node-2] 2025-04-10 01:11:23.052884 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-10 01:11:23.052892 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:11:23.052901 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-10 01:11:23.052909 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:11:23.052918 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-10 01:11:23.052927 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:11:23.052935 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-04-10 01:11:23.052944 | orchestrator | 2025-04-10 01:11:23.052953 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-04-10 01:11:23.052961 | orchestrator | Thursday 10 April 2025 01:08:10 +0000 (0:00:04.026) 0:01:57.889 ******** 2025-04-10 01:11:23.052970 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-10 01:11:23.052979 | orchestrator | 2025-04-10 01:11:23.052987 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-04-10 01:11:23.052996 | orchestrator | Thursday 10 April 2025 01:08:11 +0000 (0:00:00.625) 0:01:58.515 ******** 2025-04-10 01:11:23.053005 | orchestrator | skipping: [testbed-manager] 2025-04-10 01:11:23.053039 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:11:23.053049 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:11:23.053058 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:11:23.053066 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:11:23.053075 | orchestrator | 
skipping: [testbed-node-4] 2025-04-10 01:11:23.053083 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:11:23.053092 | orchestrator | 2025-04-10 01:11:23.053100 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-04-10 01:11:23.053114 | orchestrator | Thursday 10 April 2025 01:08:11 +0000 (0:00:00.867) 0:01:59.382 ******** 2025-04-10 01:11:23.053123 | orchestrator | skipping: [testbed-manager] 2025-04-10 01:11:23.053131 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:11:23.053140 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:11:23.053148 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:11:23.053157 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:11:23.053165 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:11:23.053174 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:11:23.053182 | orchestrator | 2025-04-10 01:11:23.053191 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-04-10 01:11:23.053205 | orchestrator | Thursday 10 April 2025 01:08:16 +0000 (0:00:04.505) 0:02:03.887 ******** 2025-04-10 01:11:23.053214 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-10 01:11:23.053222 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:11:23.053231 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-10 01:11:23.053240 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:11:23.053254 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-10 01:11:23.053263 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:11:23.053276 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-10 01:11:23.053286 | orchestrator | skipping: [testbed-node-4] 2025-04-10 
2025-04-10 01:11:23.053294 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
orchestrator | Thursday 10 April 2025 01:08:20 +0000 (0:00:04.589) 0:02:08.476 ********
orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator | skipping: [testbed-node-5]
orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator |
orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
orchestrator | Thursday 10 April 2025 01:08:25 +0000 (0:00:04.788) 0:02:13.265 ********
orchestrator | [WARNING]: Skipped
orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
orchestrator | due to this access issue:
orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
orchestrator | not a directory
orchestrator | ok: [testbed-manager -> localhost]
orchestrator |
orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
orchestrator | Thursday 10 April 2025 01:08:28 +0000 (0:00:02.815) 0:02:16.081 ********
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
orchestrator | Thursday 10 April 2025 01:08:30 +0000 (0:00:02.066) 0:02:18.147 ********
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [prometheus : Copying over prometheus msteams config file] ****************
orchestrator | Thursday 10 April 2025 01:08:31 +0000 (0:00:01.027) 0:02:19.175 ********
orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [prometheus : Copying over prometheus msteams template file] **************
orchestrator | Thursday 10 April 2025 01:08:35 +0000 (0:00:04.290) 0:02:23.465 ********
orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [prometheus : Check prometheus containers] ********************************
orchestrator | Thursday 10 April 2025 01:08:40 +0000 (0:00:04.921) 0:02:28.387 ********
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-1] =>
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.054741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.054750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.054766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.054783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.054797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.054806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.054815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.054835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-10 01:11:23.054851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-10 01:11:23.054861 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.054869 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.054879 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.054887 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-10 01:11:23.054903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.054916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.054930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.054939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.054948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.054957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.054966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.054981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.054990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.055007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-10 01:11:23.055041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.055050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-10 01:11:23.055059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-10 01:11:23.055068 | orchestrator | 2025-04-10 01:11:23.055077 | orchestrator | TASK [prometheus : Creating 
prometheus database user and setting permissions] ***
2025-04-10 01:11:23.055085 | orchestrator | Thursday 10 April 2025 01:08:47 +0000 (0:00:06.217) 0:02:34.604 ********
2025-04-10 01:11:23.055094 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-04-10 01:11:23.055103 | orchestrator |
2025-04-10 01:11:23.055111 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-04-10 01:11:23.055120 | orchestrator | Thursday 10 April 2025 01:08:50 +0000 (0:00:00.065) 0:02:38.052 ********
2025-04-10 01:11:23.055128 | orchestrator |
2025-04-10 01:11:23.055137 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-04-10 01:11:23.055146 | orchestrator | Thursday 10 April 2025 01:08:50 +0000 (0:00:00.247) 0:02:38.299 ********
2025-04-10 01:11:23.055154 | orchestrator |
2025-04-10 01:11:23.055163 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-04-10 01:11:23.055171 | orchestrator | Thursday 10 April 2025 01:08:50 +0000 (0:00:00.057) 0:02:38.357 ********
2025-04-10 01:11:23.055180 | orchestrator |
2025-04-10 01:11:23.055188 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-04-10 01:11:23.055197 | orchestrator | Thursday 10 April 2025 01:08:50 +0000 (0:00:00.056) 0:02:38.413 ********
2025-04-10 01:11:23.055205 | orchestrator |
2025-04-10 01:11:23.055214 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-04-10 01:11:23.055222 | orchestrator | Thursday 10 April 2025 01:08:50 +0000 (0:00:00.053) 0:02:38.467 ********
2025-04-10 01:11:23.055257 | orchestrator |
2025-04-10 01:11:23.055265 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-04-10 01:11:23.055277 | orchestrator | Thursday 10 April 2025 01:08:51 +0000 (0:00:00.286) 0:02:38.753 ********
2025-04-10 01:11:23.055286 | orchestrator |
2025-04-10 01:11:23.055294 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-04-10 01:11:23.055303 | orchestrator | Thursday 10 April 2025 01:08:51 +0000 (0:00:00.093) 0:02:38.847 ********
2025-04-10 01:11:23.055311 | orchestrator | changed: [testbed-manager]
2025-04-10 01:11:23.055320 | orchestrator |
2025-04-10 01:11:23.055328 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-04-10 01:11:23.055337 | orchestrator | Thursday 10 April 2025 01:09:12 +0000 (0:00:20.674) 0:02:59.521 ********
2025-04-10 01:11:23.055345 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:11:23.055354 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:11:23.055362 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:11:23.055371 | orchestrator | changed: [testbed-manager]
2025-04-10 01:11:23.055380 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:11:23.055388 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:11:23.055397 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:11:23.055405 | orchestrator |
2025-04-10 01:11:23.055414 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-04-10 01:11:23.055426 | orchestrator | Thursday 10 April 2025 01:09:34 +0000 (0:00:22.565) 0:03:22.086 ********
2025-04-10 01:11:23.055435 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:11:23.055447 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:11:23.055456 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:11:23.055465 | orchestrator |
2025-04-10 01:11:23.055473 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-04-10 01:11:23.055482 | orchestrator | Thursday 10 April 2025 01:09:53 +0000 (0:00:18.545) 0:03:40.631 ********
2025-04-10 01:11:23.055490 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:11:23.055499 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:11:23.055508 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:11:23.055516 | orchestrator |
2025-04-10 01:11:23.055524 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-04-10 01:11:23.055533 | orchestrator | Thursday 10 April 2025 01:10:07 +0000 (0:00:14.523) 0:03:55.155 ********
2025-04-10 01:11:23.055542 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:11:23.055550 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:11:23.055559 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:11:23.055567 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:11:23.055576 | orchestrator | changed: [testbed-manager]
2025-04-10 01:11:23.055584 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:11:23.055593 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:11:23.055601 | orchestrator |
2025-04-10 01:11:23.055610 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-04-10 01:11:23.055618 | orchestrator | Thursday 10 April 2025 01:10:29 +0000 (0:00:21.559) 0:04:16.715 ********
2025-04-10 01:11:23.055627 | orchestrator | changed: [testbed-manager]
2025-04-10 01:11:23.055635 | orchestrator |
2025-04-10 01:11:23.055644 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-04-10 01:11:23.055653 | orchestrator | Thursday 10 April 2025 01:10:40 +0000 (0:00:11.098) 0:04:27.813 ********
2025-04-10 01:11:23.055661 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:11:23.055670 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:11:23.055679 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:11:23.055687 | orchestrator |
2025-04-10 01:11:23.055696 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-04-10 01:11:23.055704 | orchestrator | Thursday 10 April 2025 01:10:53 +0000 (0:00:12.884) 0:04:40.698 ********
2025-04-10 01:11:23.055713 | orchestrator | changed: [testbed-manager]
2025-04-10 01:11:23.055722 | orchestrator |
2025-04-10 01:11:23.055730 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-04-10 01:11:23.055739 | orchestrator | Thursday 10 April 2025 01:11:08 +0000 (0:00:15.124) 0:04:55.822 ********
2025-04-10 01:11:23.055752 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:11:23.055761 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:11:23.055769 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:11:23.055778 | orchestrator |
2025-04-10 01:11:23.055787 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 01:11:23.055795 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-04-10 01:11:23.055805 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-04-10 01:11:23.055814 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-04-10 01:11:23.055823 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-04-10 01:11:23.055831 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-04-10 01:11:23.055839 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-04-10 01:11:23.055848 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-04-10 01:11:23.055857 | orchestrator |
2025-04-10 01:11:23.055865 | orchestrator |
2025-04-10 01:11:23.055874 | orchestrator | TASKS RECAP ********************************************************************
2025-04-10 01:11:23.055883 | orchestrator | Thursday 10 April 2025 01:11:20 +0000 (0:00:12.127) 0:05:07.950 ********
2025-04-10 01:11:23.055891 | orchestrator | ===============================================================================
2025-04-10 01:11:23.055900 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 37.95s
2025-04-10 01:11:23.055908 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 26.11s
2025-04-10 01:11:23.055917 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 22.57s
2025-04-10 01:11:23.055925 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 21.56s
2025-04-10 01:11:23.055934 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.67s
2025-04-10 01:11:23.055942 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 18.55s
2025-04-10 01:11:23.055954 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 15.12s
2025-04-10 01:11:23.055963 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 14.52s
2025-04-10 01:11:23.055971 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.88s
2025-04-10 01:11:23.055980 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.13s
2025-04-10 01:11:23.055991 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.10s
2025-04-10 01:11:26.095317 | orchestrator | prometheus : Copying over config.json files ---------------------------- 11.08s
2025-04-10 01:11:26.095445 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.69s
2025-04-10 01:11:26.095466 | orchestrator | prometheus : Check prometheus containers -------------------------------- 6.22s
2025-04-10 01:11:26.095482 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 4.92s
2025-04-10 01:11:26.095496 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.87s
2025-04-10 01:11:26.095510 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 4.79s
2025-04-10 01:11:26.095525 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.68s
2025-04-10 01:11:26.095567 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 4.59s
2025-04-10 01:11:26.095582 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 4.51s
2025-04-10 01:11:26.095614 | orchestrator | 2025-04-10 01:11:26 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:11:26.098335 | orchestrator | 2025-04-10 01:11:26 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:11:26.103186 | orchestrator | 2025-04-10 01:11:26 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:11:26.104770 | orchestrator | 2025-04-10 01:11:26 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state STARTED
2025-04-10 01:11:26.107008 | orchestrator | 2025-04-10 01:11:26 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED
2025-04-10 01:11:29.144304 | orchestrator | 2025-04-10 01:11:26 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:11:29.144407 | orchestrator | 2025-04-10 01:11:29 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:11:29.146495 | orchestrator | 2025-04-10 01:11:29 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:11:29.148005 | orchestrator | 2025-04-10 01:11:29 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:11:29.151720 | orchestrator |
2025-04-10 01:11:29.153508 | orchestrator | 2025-04-10 01:11:29 | INFO  | Task 58609288-d083-4640-beb6-1dfcac54e528 is in state SUCCESS
2025-04-10 01:11:29.153542 | orchestrator |
2025-04-10 01:11:29.153550 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-10 01:11:29.153558 | orchestrator |
2025-04-10 01:11:29.153565 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-10 01:11:29.153573 | orchestrator | Thursday 10 April 2025 01:08:11 +0000 (0:00:00.338) 0:00:00.338 ********
2025-04-10 01:11:29.153580 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:11:29.153589 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:11:29.153596 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:11:29.153603 | orchestrator | ok: [testbed-node-3]
2025-04-10 01:11:29.153611 | orchestrator | ok: [testbed-node-4]
2025-04-10 01:11:29.153618 | orchestrator | ok: [testbed-node-5]
2025-04-10 01:11:29.153625 | orchestrator |
2025-04-10 01:11:29.153632 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-10 01:11:29.153639 | orchestrator | Thursday 10 April 2025 01:08:12 +0000 (0:00:00.897) 0:00:01.236 ********
2025-04-10 01:11:29.153646 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-04-10 01:11:29.153654 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-04-10 01:11:29.153661 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-04-10 01:11:29.153667 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-04-10 01:11:29.153674 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-04-10 01:11:29.153681 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-04-10 01:11:29.153697 | orchestrator |
2025-04-10 01:11:29.153704 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-04-10 01:11:29.153711 | orchestrator |
2025-04-10 01:11:29.153718 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-04-10 01:11:29.153725 | orchestrator | Thursday 10 April 2025 01:08:14 +0000 (0:00:02.043) 0:00:03.279 ********
2025-04-10 01:11:29.153732 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:11:29.153741 | orchestrator |
2025-04-10 01:11:29.153774 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-04-10 01:11:29.153783 | orchestrator | Thursday 10 April 2025 01:08:16 +0000 (0:00:01.683) 0:00:04.962 ********
2025-04-10 01:11:29.153892 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-04-10 01:11:29.154430 | orchestrator |
2025-04-10 01:11:29.154440 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-04-10 01:11:29.154448 | orchestrator | Thursday 10 April 2025 01:08:20 +0000 (0:00:04.020) 0:00:08.983 ********
2025-04-10 01:11:29.154456 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-04-10 01:11:29.154464 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-04-10 01:11:29.154471 | orchestrator |
2025-04-10 01:11:29.154479 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-04-10 01:11:29.154486 | orchestrator | Thursday 10 April 2025 01:08:27 +0000 (0:00:07.338) 0:00:16.321
******** 2025-04-10 01:11:29.154522 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-10 01:11:29.154532 | orchestrator | 2025-04-10 01:11:29.154539 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-04-10 01:11:29.154989 | orchestrator | Thursday 10 April 2025 01:08:31 +0000 (0:00:03.624) 0:00:19.946 ******** 2025-04-10 01:11:29.155101 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-10 01:11:29.155115 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-04-10 01:11:29.155122 | orchestrator | 2025-04-10 01:11:29.155129 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-04-10 01:11:29.155136 | orchestrator | Thursday 10 April 2025 01:08:35 +0000 (0:00:03.946) 0:00:23.892 ******** 2025-04-10 01:11:29.155143 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-10 01:11:29.155150 | orchestrator | 2025-04-10 01:11:29.155166 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-04-10 01:11:29.155173 | orchestrator | Thursday 10 April 2025 01:08:39 +0000 (0:00:03.599) 0:00:27.492 ******** 2025-04-10 01:11:29.155180 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-04-10 01:11:29.155187 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-04-10 01:11:29.155194 | orchestrator | 2025-04-10 01:11:29.155201 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-04-10 01:11:29.155208 | orchestrator | Thursday 10 April 2025 01:08:48 +0000 (0:00:08.979) 0:00:36.472 ******** 2025-04-10 01:11:29.155262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.155275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.155283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.155300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.155308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.155323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.155354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.155369 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.155376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.155384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.155675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.155721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.155739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.155747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.155754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.155761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.155788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.155805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.155816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.155824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.155831 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.155839 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.155869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-10 01:11:29.155882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-10 01:11:29.155890 | orchestrator |
2025-04-10 01:11:29.155897 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-04-10 01:11:29.155905 | orchestrator | Thursday 10 April 2025 01:08:50 +0000 (0:00:02.583) 0:00:39.056 ********
2025-04-10 01:11:29.155912 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:11:29.155919 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:11:29.155927 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:11:29.155934 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:11:29.155941 | orchestrator |
2025-04-10 01:11:29.155948 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-04-10 01:11:29.155955 | orchestrator | Thursday 10 April 2025 01:08:51 +0000 (0:00:01.253) 0:00:40.309 ********
2025-04-10 01:11:29.155962 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-04-10 01:11:29.155970 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-04-10 01:11:29.155977 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-04-10 01:11:29.155984 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-04-10 01:11:29.155990 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-04-10 01:11:29.155998 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-04-10 01:11:29.156005 | orchestrator |
2025-04-10 01:11:29.156030 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-04-10 01:11:29.156038 | orchestrator | Thursday 10 April 2025 01:08:55 +0000 (0:00:03.793) 0:00:44.103 ********
2025-04-10 01:11:29.156046 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 
2025-04-10 01:11:29.156055 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-10 01:11:29.156085 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-10 01:11:29.156094 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-10 01:11:29.156102 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-10 01:11:29.156119 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-10 
01:11:29.156127 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-10 01:11:29.156157 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-10 01:11:29.156166 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-10 01:11:29.156174 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-10 01:11:29.156182 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-10 01:11:29.156209 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-10 01:11:29.156219 | orchestrator | 2025-04-10 01:11:29.156226 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-04-10 01:11:29.156233 | orchestrator | Thursday 10 April 2025 01:09:00 +0000 (0:00:04.700) 0:00:48.803 ******** 2025-04-10 01:11:29.156240 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-04-10 01:11:29.156248 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-04-10 01:11:29.156255 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True})
2025-04-10 01:11:29.156262 | orchestrator |
2025-04-10 01:11:29.156269 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-04-10 01:11:29.156276 | orchestrator | Thursday 10 April 2025 01:09:03 +0000 (0:00:02.918) 0:00:51.722 ********
2025-04-10 01:11:29.156283 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-04-10 01:11:29.156290 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-04-10 01:11:29.156297 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-04-10 01:11:29.156304 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-04-10 01:11:29.156311 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-04-10 01:11:29.156318 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-04-10 01:11:29.156326 | orchestrator |
2025-04-10 01:11:29.156334 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-04-10 01:11:29.156342 | orchestrator | Thursday 10 April 2025 01:09:07 +0000 (0:00:04.327) 0:00:56.050 ********
2025-04-10 01:11:29.156349 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-04-10 01:11:29.156357 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-04-10 01:11:29.156365 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-04-10 01:11:29.156373 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-04-10 01:11:29.156381 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-04-10 01:11:29.156389 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-04-10 01:11:29.156397 | orchestrator |
2025-04-10 01:11:29.156405 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-04-10 01:11:29.156412 | orchestrator |
Thursday 10 April 2025 01:09:09 +0000 (0:00:01.691) 0:00:57.741 ********
2025-04-10 01:11:29.156420 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:11:29.156428 | orchestrator |
2025-04-10 01:11:29.156436 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-04-10 01:11:29.156444 | orchestrator | Thursday 10 April 2025 01:09:09 +0000 (0:00:00.128) 0:00:57.870 ********
2025-04-10 01:11:29.156452 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:11:29.156459 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:11:29.156467 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:11:29.156475 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:11:29.156482 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:11:29.156490 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:11:29.156502 | orchestrator |
2025-04-10 01:11:29.156510 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-04-10 01:11:29.156518 | orchestrator | Thursday 10 April 2025 01:09:11 +0000 (0:00:01.727) 0:00:59.598 ********
2025-04-10 01:11:29.156527 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-10 01:11:29.156536 | orchestrator |
2025-04-10 01:11:29.156544 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-04-10 01:11:29.156552 | orchestrator | Thursday 10 April 2025 01:09:15 +0000 (0:00:04.028) 0:01:03.626 ********
2025-04-10 01:11:29.156560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.156595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.156605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.156613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.156631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.156639 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.156663 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.156672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.156686 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.156698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.156705 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.156713 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.156720 | orchestrator | 2025-04-10 01:11:29.156727 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-04-10 01:11:29.156735 | orchestrator | Thursday 10 April 2025 01:09:20 +0000 (0:00:05.646) 0:01:09.272 ******** 2025-04-10 01:11:29.156758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.156772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.156780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.156791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.156799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.156822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.156831 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:11:29.156844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.156852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-10 01:11:29.156866 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:11:29.156874 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:11:29.156881 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:11:29.156888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-10 01:11:29.156895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-10 01:11:29.156903 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:11:29.156931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-10 01:11:29.156940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-10 01:11:29.156954 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:11:29.156961 | orchestrator |
2025-04-10 01:11:29.156968 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-04-10 01:11:29.156975 | orchestrator | Thursday 10 April 2025 01:09:24
+0000 (0:00:03.780) 0:01:13.053 ******** 2025-04-10 01:11:29.156982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.156990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.156997 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:11:29.157004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.157066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157076 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:11:29.157091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.157104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157112 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:11:29.157119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157127 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157134 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:11:29.157163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157184 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:11:29.157192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157206 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:11:29.157214 | orchestrator | 2025-04-10 01:11:29.157221 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-04-10 01:11:29.157231 | orchestrator | Thursday 10 April 2025 01:09:28 +0000 (0:00:03.408) 0:01:16.461 ******** 2025-04-10 01:11:29.157238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.157262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.157375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2025-04-10 01:11:29.157384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.157416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.157436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.157459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157517 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157539 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157577 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157638 | orchestrator | 2025-04-10 01:11:29.157645 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-04-10 01:11:29.157653 | orchestrator | Thursday 10 April 2025 01:09:31 +0000 (0:00:03.672) 0:01:20.133 ******** 2025-04-10 01:11:29.157660 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-04-10 01:11:29.157667 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:11:29.157675 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-04-10 01:11:29.157682 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:11:29.157689 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-04-10 01:11:29.157696 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:11:29.157706 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-04-10 01:11:29.157713 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-04-10 01:11:29.157720 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-04-10 01:11:29.157727 | orchestrator | 2025-04-10 01:11:29.157734 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-04-10 01:11:29.157741 | orchestrator | Thursday 10 April 2025 01:09:34 +0000 (0:00:02.884) 0:01:23.017 ******** 2025-04-10 01:11:29.157748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.157756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.157785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.157800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.157825 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.157845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.157852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157865 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157879 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157947 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.157968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.157976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-10 01:11:29.157987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-10 01:11:29.157995 | orchestrator |
2025-04-10 01:11:29.158005 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2025-04-10 01:11:29.158071 | orchestrator | Thursday 10 April 2025 01:09:48 +0000 (0:00:14.338) 0:01:37.355 ********
2025-04-10 01:11:29.158082 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:11:29.158091 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:11:29.158098 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:11:29.158106 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:11:29.158114 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:11:29.158122 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:11:29.158130 | orchestrator |
2025-04-10 01:11:29.158138 | orchestrator | TASK [cinder : Copying over existing policy file]
****************************** 2025-04-10 01:11:29.158146 | orchestrator | Thursday 10 April 2025 01:09:51 +0000 (0:00:02.874) 0:01:40.230 ******** 2025-04-10 01:11:29.158154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.158163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}})  2025-04-10 01:11:29.158176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158235 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.158258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158287 | orchestrator | skipping: 
[testbed-node-1] 2025-04-10 01:11:29.158295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.158307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158342 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:11:29.158350 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:11:29.158358 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:11:29.158366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.158379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158401 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158408 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:11:29.158419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.158426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}})
2025-04-10 01:11:29.158460 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:11:29.158467 | orchestrator |
2025-04-10 01:11:29.158474 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2025-04-10 01:11:29.158481 | orchestrator | Thursday 10 April 2025 01:09:53 +0000 (0:00:02.049) 0:01:42.279 ********
2025-04-10 01:11:29.158488 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:11:29.158495 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:11:29.158502 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:11:29.158509 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:11:29.158515 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:11:29.158522 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:11:29.158529 | orchestrator |
2025-04-10 01:11:29.158536 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2025-04-10 01:11:29.158543 | orchestrator | Thursday 10 April 2025 01:09:55 +0000 (0:00:02.081) 0:01:44.361 ********
2025-04-10 01:11:29.158554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776',
'tls_backend': 'no'}}}})  2025-04-10 01:11:29.158562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.158580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-10 01:11:29.158600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.158619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.158631 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.158643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-10 01:11:29.158654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.158661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.158678 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.158686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.158693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.158724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.158736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-10 01:11:29.158775 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.158787 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-10 01:11:29.158794 | orchestrator | 2025-04-10 01:11:29.158801 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-10 01:11:29.158809 | orchestrator | Thursday 10 April 2025 01:09:59 +0000 (0:00:03.748) 0:01:48.110 ******** 2025-04-10 01:11:29.158816 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:11:29.158823 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:11:29.158830 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:11:29.158837 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:11:29.158844 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:11:29.158851 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:11:29.158858 | orchestrator | 2025-04-10 01:11:29.158865 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-04-10 01:11:29.158872 | orchestrator | Thursday 10 April 2025 01:10:00 +0000 (0:00:00.653) 0:01:48.763 ******** 2025-04-10 01:11:29.158879 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:11:29.158886 | orchestrator | 2025-04-10 01:11:29.158893 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-04-10 01:11:29.158900 | orchestrator | Thursday 10 April 2025 01:10:03 +0000 (0:00:02.672) 0:01:51.435 ******** 2025-04-10 01:11:29.158907 | orchestrator | changed: [testbed-node-0] 2025-04-10 
01:11:29.158914 | orchestrator | 2025-04-10 01:11:29.158921 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-04-10 01:11:29.158929 | orchestrator | Thursday 10 April 2025 01:10:05 +0000 (0:00:02.567) 0:01:54.003 ******** 2025-04-10 01:11:29.158936 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:11:29.158942 | orchestrator | 2025-04-10 01:11:29.158949 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-10 01:11:29.158956 | orchestrator | Thursday 10 April 2025 01:10:23 +0000 (0:00:17.591) 0:02:11.595 ******** 2025-04-10 01:11:29.158963 | orchestrator | 2025-04-10 01:11:29.158970 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-10 01:11:29.158977 | orchestrator | Thursday 10 April 2025 01:10:23 +0000 (0:00:00.095) 0:02:11.690 ******** 2025-04-10 01:11:29.158985 | orchestrator | 2025-04-10 01:11:29.158991 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-10 01:11:29.158999 | orchestrator | Thursday 10 April 2025 01:10:23 +0000 (0:00:00.300) 0:02:11.990 ******** 2025-04-10 01:11:29.159006 | orchestrator | 2025-04-10 01:11:29.159052 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-10 01:11:29.159061 | orchestrator | Thursday 10 April 2025 01:10:23 +0000 (0:00:00.073) 0:02:12.064 ******** 2025-04-10 01:11:29.159068 | orchestrator | 2025-04-10 01:11:29.159075 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-10 01:11:29.159249 | orchestrator | Thursday 10 April 2025 01:10:23 +0000 (0:00:00.083) 0:02:12.148 ******** 2025-04-10 01:11:29.159258 | orchestrator | 2025-04-10 01:11:29.159265 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-10 01:11:29.159278 | orchestrator | Thursday 10 
April 2025 01:10:23 +0000 (0:00:00.095) 0:02:12.243 ******** 2025-04-10 01:11:29.159285 | orchestrator | 2025-04-10 01:11:29.159292 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-04-10 01:11:29.159299 | orchestrator | Thursday 10 April 2025 01:10:24 +0000 (0:00:00.507) 0:02:12.751 ******** 2025-04-10 01:11:29.159306 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:11:29.159313 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:11:29.159320 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:11:29.159327 | orchestrator | 2025-04-10 01:11:29.159334 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-04-10 01:11:29.159341 | orchestrator | Thursday 10 April 2025 01:10:46 +0000 (0:00:22.606) 0:02:35.358 ******** 2025-04-10 01:11:29.159348 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:11:29.159355 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:11:29.159362 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:11:29.159369 | orchestrator | 2025-04-10 01:11:29.159376 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-04-10 01:11:29.159387 | orchestrator | Thursday 10 April 2025 01:10:52 +0000 (0:00:05.383) 0:02:40.742 ******** 2025-04-10 01:11:32.196532 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:11:32.196619 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:11:32.196628 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:11:32.196635 | orchestrator | 2025-04-10 01:11:32.196643 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-04-10 01:11:32.196650 | orchestrator | Thursday 10 April 2025 01:11:15 +0000 (0:00:23.014) 0:03:03.756 ******** 2025-04-10 01:11:32.196656 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:11:32.196662 | orchestrator | changed: [testbed-node-3] 2025-04-10 
01:11:32.196667 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:11:32.196673 | orchestrator | 2025-04-10 01:11:32.196679 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-04-10 01:11:32.196686 | orchestrator | Thursday 10 April 2025 01:11:26 +0000 (0:00:11.376) 0:03:15.133 ******** 2025-04-10 01:11:32.196692 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:11:32.196697 | orchestrator | 2025-04-10 01:11:32.196703 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:11:32.196710 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-10 01:11:32.196717 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-10 01:11:32.196723 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-10 01:11:32.196729 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-10 01:11:32.196735 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-10 01:11:32.196740 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-10 01:11:32.196746 | orchestrator | 2025-04-10 01:11:32.196752 | orchestrator | 2025-04-10 01:11:32.196757 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:11:32.196763 | orchestrator | Thursday 10 April 2025 01:11:27 +0000 (0:00:00.673) 0:03:15.807 ******** 2025-04-10 01:11:32.196769 | orchestrator | =============================================================================== 2025-04-10 01:11:32.196774 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.01s 2025-04-10 01:11:32.196780 | 
orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.61s 2025-04-10 01:11:32.196806 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.59s 2025-04-10 01:11:32.196812 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 14.34s 2025-04-10 01:11:32.196818 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.38s 2025-04-10 01:11:32.196824 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.98s 2025-04-10 01:11:32.196829 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.34s 2025-04-10 01:11:32.196835 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.65s 2025-04-10 01:11:32.196841 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.38s 2025-04-10 01:11:32.196847 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.70s 2025-04-10 01:11:32.196852 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.33s 2025-04-10 01:11:32.196858 | orchestrator | cinder : include_tasks -------------------------------------------------- 4.03s 2025-04-10 01:11:32.196864 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.02s 2025-04-10 01:11:32.196870 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.95s 2025-04-10 01:11:32.196876 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.79s 2025-04-10 01:11:32.196882 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS certificate --- 3.78s 2025-04-10 01:11:32.196897 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.75s 2025-04-10 01:11:32.196904 | 
orchestrator | cinder : Copying over config.json files for services -------------------- 3.67s 2025-04-10 01:11:32.196909 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.62s 2025-04-10 01:11:32.196915 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.60s 2025-04-10 01:11:32.196921 | orchestrator | 2025-04-10 01:11:29 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED 2025-04-10 01:11:32.196928 | orchestrator | 2025-04-10 01:11:29 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED 2025-04-10 01:11:32.196934 | orchestrator | 2025-04-10 01:11:29 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:11:32.196952 | orchestrator | 2025-04-10 01:11:32 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:11:32.197458 | orchestrator | 2025-04-10 01:11:32 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:11:32.197977 | orchestrator | 2025-04-10 01:11:32 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:11:32.198825 | orchestrator | 2025-04-10 01:11:32 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED 2025-04-10 01:11:32.199558 | orchestrator | 2025-04-10 01:11:32 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED 2025-04-10 01:11:32.199893 | orchestrator | 2025-04-10 01:11:32 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:11:35.242939 | orchestrator | 2025-04-10 01:11:35 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:11:35.246558 | orchestrator | 2025-04-10 01:11:35 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:11:35.249361 | orchestrator | 2025-04-10 01:11:35 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:11:35.253577 | orchestrator | 2025-04-10 01:11:35 | INFO  
| Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED
2025-04-10 01:11:35.256596 | orchestrator | 2025-04-10 01:11:35 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED
2025-04-10 01:11:38.328642 | orchestrator | 2025-04-10 01:11:35 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:11:38.328788 | orchestrator | 2025-04-10 01:11:38 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:11:38.331230 | orchestrator | 2025-04-10 01:11:38 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:11:38.331283 | orchestrator | 2025-04-10 01:11:38 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:11:38.334713 | orchestrator | 2025-04-10 01:11:38 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED
2025-04-10 01:11:38.336543 | orchestrator | 2025-04-10 01:11:38 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED
2025-04-10 01:11:38.337233 | orchestrator | 2025-04-10 01:11:38 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:11:41.393715 | orchestrator | 2025-04-10 01:11:41 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:11:41.394626 | orchestrator | 2025-04-10 01:11:41 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:11:41.396243 | orchestrator | 2025-04-10 01:11:41 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:11:41.398521 | orchestrator | 2025-04-10 01:11:41 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED
2025-04-10 01:11:41.399864 | orchestrator | 2025-04-10 01:11:41 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED
2025-04-10 01:11:41.399964 | orchestrator | 2025-04-10 01:11:41 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:11:44.442367 | orchestrator | 2025-04-10 01:11:44 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:11:44.447043 | orchestrator | 2025-04-10 01:11:44 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:11:44.447097 | orchestrator | 2025-04-10 01:11:44 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:11:44.447814 | orchestrator | 2025-04-10 01:11:44 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED
2025-04-10 01:11:44.448420 | orchestrator | 2025-04-10 01:11:44 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED
2025-04-10 01:11:47.495944 | orchestrator | 2025-04-10 01:11:44 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:11:47.496131 | orchestrator | 2025-04-10 01:11:47 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:11:47.497242 | orchestrator | 2025-04-10 01:11:47 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:11:47.498493 | orchestrator | 2025-04-10 01:11:47 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:11:47.500362 | orchestrator | 2025-04-10 01:11:47 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED
2025-04-10 01:11:47.501316 | orchestrator | 2025-04-10 01:11:47 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED
2025-04-10 01:11:47.501550 | orchestrator | 2025-04-10 01:11:47 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:11:50.551982 | orchestrator | 2025-04-10 01:11:50 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:11:50.553200 | orchestrator | 2025-04-10 01:11:50 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:11:50.555237 | orchestrator | 2025-04-10 01:11:50 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:11:50.556970 | orchestrator | 2025-04-10 01:11:50 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED
2025-04-10 01:11:50.558887 | orchestrator | 2025-04-10 01:11:50 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED
2025-04-10 01:11:53.608921 | orchestrator | 2025-04-10 01:11:50 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:11:53.609105 | orchestrator | 2025-04-10 01:11:53 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:11:53.610131 | orchestrator | 2025-04-10 01:11:53 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:11:53.612110 | orchestrator | 2025-04-10 01:11:53 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:11:53.613578 | orchestrator | 2025-04-10 01:11:53 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED
2025-04-10 01:11:53.615834 | orchestrator | 2025-04-10 01:11:53 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED
2025-04-10 01:11:53.615939 | orchestrator | 2025-04-10 01:11:53 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:11:56.660463 | orchestrator | 2025-04-10 01:11:56 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:11:56.665280 | orchestrator | 2025-04-10 01:11:56 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED
2025-04-10 01:11:59.709711 | orchestrator | 2025-04-10 01:11:56 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:11:59.709868 | orchestrator | 2025-04-10 01:11:56 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED
2025-04-10 01:11:59.710251 | orchestrator | 2025-04-10 01:11:56 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED
2025-04-10 01:11:59.710303 | orchestrator | 2025-04-10 01:11:56 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:11:59.710351 | orchestrator | 2025-04-10 01:11:59 | INFO  | Task
82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:11:59.711462 | orchestrator | 2025-04-10 01:11:59 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state STARTED 2025-04-10 01:11:59.711497 | orchestrator | 2025-04-10 01:11:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:11:59.712895 | orchestrator | 2025-04-10 01:11:59 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED 2025-04-10 01:11:59.714515 | orchestrator | 2025-04-10 01:11:59 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED 2025-04-10 01:12:02.769820 | orchestrator | 2025-04-10 01:11:59 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:12:02.769953 | orchestrator | 2025-04-10 01:12:02 | INFO  | Task eaff6e38-4f4b-4da5-9201-6cef6099abd5 is in state STARTED 2025-04-10 01:12:02.771009 | orchestrator | 2025-04-10 01:12:02 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:12:02.773444 | orchestrator | 2025-04-10 01:12:02 | INFO  | Task 6f88773b-2f20-4214-93bd-79c8a9d5e5aa is in state SUCCESS 2025-04-10 01:12:02.774170 | orchestrator | 2025-04-10 01:12:02.775993 | orchestrator | 2025-04-10 01:12:02.776075 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 01:12:02.776094 | orchestrator | 2025-04-10 01:12:02.776109 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 01:12:02.776123 | orchestrator | Thursday 10 April 2025 01:07:55 +0000 (0:00:01.062) 0:00:01.062 ******** 2025-04-10 01:12:02.776138 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:12:02.776154 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:12:02.776168 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:12:02.776183 | orchestrator | 2025-04-10 01:12:02.776198 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 
01:12:02.776253 | orchestrator | Thursday 10 April 2025 01:07:56 +0000 (0:00:00.990) 0:00:02.053 ******** 2025-04-10 01:12:02.776279 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-04-10 01:12:02.776302 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-04-10 01:12:02.776326 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-04-10 01:12:02.776350 | orchestrator | 2025-04-10 01:12:02.776374 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-04-10 01:12:02.776395 | orchestrator | 2025-04-10 01:12:02.776409 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-10 01:12:02.776424 | orchestrator | Thursday 10 April 2025 01:07:56 +0000 (0:00:00.788) 0:00:02.841 ******** 2025-04-10 01:12:02.776438 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:12:02.776454 | orchestrator | 2025-04-10 01:12:02.776468 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-04-10 01:12:02.776482 | orchestrator | Thursday 10 April 2025 01:07:58 +0000 (0:00:01.933) 0:00:04.775 ******** 2025-04-10 01:12:02.776496 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-04-10 01:12:02.776510 | orchestrator | 2025-04-10 01:12:02.776524 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-04-10 01:12:02.776538 | orchestrator | Thursday 10 April 2025 01:08:02 +0000 (0:00:03.920) 0:00:08.696 ******** 2025-04-10 01:12:02.776552 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-04-10 01:12:02.776566 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-04-10 01:12:02.776581 | orchestrator | 2025-04-10 
01:12:02.776602 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-04-10 01:12:02.776626 | orchestrator | Thursday 10 April 2025 01:08:09 +0000 (0:00:07.133) 0:00:15.830 ******** 2025-04-10 01:12:02.776651 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-10 01:12:02.776677 | orchestrator | 2025-04-10 01:12:02.776697 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-04-10 01:12:02.776713 | orchestrator | Thursday 10 April 2025 01:08:13 +0000 (0:00:04.086) 0:00:19.916 ******** 2025-04-10 01:12:02.776729 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-10 01:12:02.776745 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-04-10 01:12:02.776761 | orchestrator | 2025-04-10 01:12:02.776777 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-04-10 01:12:02.776793 | orchestrator | Thursday 10 April 2025 01:08:18 +0000 (0:00:04.409) 0:00:24.325 ******** 2025-04-10 01:12:02.776809 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-10 01:12:02.776824 | orchestrator | 2025-04-10 01:12:02.776840 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-04-10 01:12:02.776856 | orchestrator | Thursday 10 April 2025 01:08:22 +0000 (0:00:03.693) 0:00:28.019 ******** 2025-04-10 01:12:02.776872 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-04-10 01:12:02.776888 | orchestrator | 2025-04-10 01:12:02.776903 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-04-10 01:12:02.776919 | orchestrator | Thursday 10 April 2025 01:08:26 +0000 (0:00:04.365) 0:00:32.384 ******** 2025-04-10 01:12:02.777077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 01:12:02.777119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 01:12:02.777137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 01:12:02.777170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 
'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 01:12:02.777188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 01:12:02.777219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 01:12:02.777236 | orchestrator | 2025-04-10 01:12:02.777250 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-10 01:12:02.777264 | orchestrator | Thursday 10 April 2025 01:08:31 +0000 (0:00:05.196) 0:00:37.581 ******** 2025-04-10 01:12:02.777279 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:12:02.777293 | orchestrator | 2025-04-10 01:12:02.777307 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-04-10 01:12:02.777321 | orchestrator | Thursday 10 April 2025 01:08:32 +0000 (0:00:00.687) 0:00:38.268 ******** 2025-04-10 01:12:02.777335 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:12:02.777355 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:12:02.777380 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:12:02.777405 | orchestrator | 2025-04-10 01:12:02.777431 | orchestrator | TASK 
[glance : Copy over multiple ceph configs for Glance] ********************* 2025-04-10 01:12:02.777455 | orchestrator | Thursday 10 April 2025 01:08:46 +0000 (0:00:14.288) 0:00:52.557 ******** 2025-04-10 01:12:02.777479 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-10 01:12:02.777503 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-10 01:12:02.777522 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-10 01:12:02.777536 | orchestrator | 2025-04-10 01:12:02.777565 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-04-10 01:12:02.777579 | orchestrator | Thursday 10 April 2025 01:08:48 +0000 (0:00:02.186) 0:00:54.744 ******** 2025-04-10 01:12:02.777594 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-10 01:12:02.777615 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-10 01:12:02.777629 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-10 01:12:02.777644 | orchestrator | 2025-04-10 01:12:02.777658 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-04-10 01:12:02.777672 | orchestrator | Thursday 10 April 2025 01:08:50 +0000 (0:00:01.579) 0:00:56.323 ******** 2025-04-10 01:12:02.777688 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:12:02.777710 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:12:02.777726 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:12:02.777749 | orchestrator | 2025-04-10 01:12:02.777773 | orchestrator | TASK [glance : Check if policies shall be overwritten] 
************************* 2025-04-10 01:12:02.777796 | orchestrator | Thursday 10 April 2025 01:08:50 +0000 (0:00:00.646) 0:00:56.970 ******** 2025-04-10 01:12:02.777823 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:12:02.777848 | orchestrator | 2025-04-10 01:12:02.777871 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-04-10 01:12:02.777892 | orchestrator | Thursday 10 April 2025 01:08:51 +0000 (0:00:00.270) 0:00:57.240 ******** 2025-04-10 01:12:02.777907 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:12:02.777930 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:12:02.778128 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:12:02.778155 | orchestrator | 2025-04-10 01:12:02.778170 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-10 01:12:02.778186 | orchestrator | Thursday 10 April 2025 01:08:51 +0000 (0:00:00.322) 0:00:57.563 ******** 2025-04-10 01:12:02.778259 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:12:02.778278 | orchestrator | 2025-04-10 01:12:02.778293 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-04-10 01:12:02.778308 | orchestrator | Thursday 10 April 2025 01:08:52 +0000 (0:00:01.101) 0:00:58.664 ******** 2025-04-10 01:12:02.778338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 01:12:02.778357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 01:12:02.778395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 01:12:02.778412 | orchestrator | 2025-04-10 01:12:02.778426 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-04-10 01:12:02.778441 | orchestrator | Thursday 10 April 2025 01:08:59 +0000 (0:00:06.888) 0:01:05.553 ******** 2025-04-10 01:12:02.778455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-10 01:12:02.778482 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:12:02.778504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-10 01:12:02.778521 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:12:02.778536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-10 01:12:02.778559 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:12:02.778573 | orchestrator | 2025-04-10 01:12:02.778587 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-04-10 01:12:02.778601 | orchestrator | Thursday 10 April 2025 01:09:05 +0000 (0:00:06.311) 0:01:11.865 ******** 2025-04-10 01:12:02.778623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-10 01:12:02.778639 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:12:02.778654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-10 01:12:02.778676 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:12:02.778690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2025-04-10 01:12:02.778705 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:12:02.778720 | orchestrator |
2025-04-10 01:12:02.778734 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2025-04-10 01:12:02.778748 | orchestrator | Thursday 10 April 2025 01:09:13 +0000 (0:00:07.594) 0:01:19.460 ********
2025-04-10 01:12:02.778762 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:12:02.778776 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:12:02.778790 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:12:02.778804 | orchestrator |
2025-04-10 01:12:02.778824 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2025-04-10 01:12:02.778839 | orchestrator | Thursday 10 April 2025 01:09:25 +0000 (0:00:11.613) 0:01:31.073 ********
2025-04-10 01:12:02.778854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 01:12:02.778875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 01:12:02.778907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 01:12:02.778940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 01:12:02.778967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 01:12:02.778992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}}) 
2025-04-10 01:12:02.779009 | orchestrator |
2025-04-10 01:12:02.779064 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-04-10 01:12:02.779081 | orchestrator | Thursday 10 April 2025 01:09:31 +0000 (0:00:06.910) 0:01:37.984 ********
2025-04-10 01:12:02.779097 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:12:02.779113 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:12:02.779129 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:12:02.779145 | orchestrator |
2025-04-10 01:12:02.779161 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-04-10 01:12:02.779178 | orchestrator | Thursday 10 April 2025 01:09:49 +0000 (0:00:17.238) 0:01:55.222 ********
2025-04-10 01:12:02.779193 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:12:02.779209 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:12:02.779225 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:12:02.779241 | orchestrator |
2025-04-10 01:12:02.779256 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-04-10 01:12:02.779270 | orchestrator | Thursday 10 April 2025 01:09:59 +0000 (0:00:09.892) 0:02:05.114 ********
2025-04-10 01:12:02.779284 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:12:02.779305 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:12:02.779320 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:12:02.779334 | orchestrator |
2025-04-10 01:12:02.779348 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-04-10 01:12:02.779362 | orchestrator | Thursday 10 April 2025 01:10:04 +0000 (0:00:05.689) 0:02:10.804 ********
2025-04-10 01:12:02.779376 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:12:02.779390 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:12:02.779404 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:12:02.779418 | orchestrator |
2025-04-10 01:12:02.779432 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-04-10 01:12:02.779447 | orchestrator | Thursday 10 April 2025 01:10:18 +0000 (0:00:13.574) 0:02:24.379 ********
2025-04-10 01:12:02.779460 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:12:02.779541 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:12:02.779557 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:12:02.779571 | orchestrator |
2025-04-10 01:12:02.779585 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-04-10 01:12:02.779599 | orchestrator | Thursday 10 April 2025 01:10:31 +0000 (0:00:12.657) 0:02:37.037 ********
2025-04-10 01:12:02.779613 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:12:02.779627 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:12:02.779641 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:12:02.779655 | orchestrator |
2025-04-10 01:12:02.779674 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-04-10 01:12:02.779688 | orchestrator | Thursday 10 April 2025 01:10:31 +0000 (0:00:00.463) 0:02:37.501 ********
2025-04-10 01:12:02.779702 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2) 
2025-04-10 01:12:02.779717 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:12:02.779731 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2) 
2025-04-10 01:12:02.779745 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:12:02.779759 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2) 
2025-04-10 01:12:02.779774 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:12:02.779788 | orchestrator |
2025-04-10 01:12:02.779802 | orchestrator | TASK [glance : Check glance containers] ****************************************
2025-04-10 01:12:02.779817 | orchestrator | Thursday 10 April 2025 01:10:35 +0000 (0:00:04.415) 0:02:41.916 ********
2025-04-10 01:12:02.779832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 01:12:02.779855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 01:12:02.779879 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 01:12:02.779901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 01:12:02.779923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-10 01:12:02.779940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-10 01:12:02.779962 | orchestrator | 2025-04-10 01:12:02.779977 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-10 01:12:02.779991 | orchestrator | Thursday 10 April 2025 01:10:41 +0000 (0:00:05.095) 0:02:47.011 ******** 2025-04-10 01:12:02.780005 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:12:02.780043 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:12:02.780070 | orchestrator | skipping: [testbed-node-2] 2025-04-10 
01:12:02.780094 | orchestrator | 2025-04-10 01:12:02.780124 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-04-10 01:12:02.780139 | orchestrator | Thursday 10 April 2025 01:10:41 +0000 (0:00:00.692) 0:02:47.704 ******** 2025-04-10 01:12:02.780154 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:12:02.780168 | orchestrator | 2025-04-10 01:12:02.780182 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-04-10 01:12:02.780196 | orchestrator | Thursday 10 April 2025 01:10:44 +0000 (0:00:02.448) 0:02:50.152 ******** 2025-04-10 01:12:02.780210 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:12:02.780225 | orchestrator | 2025-04-10 01:12:02.780239 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-04-10 01:12:02.780253 | orchestrator | Thursday 10 April 2025 01:10:46 +0000 (0:00:02.483) 0:02:52.636 ******** 2025-04-10 01:12:02.780267 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:12:02.780281 | orchestrator | 2025-04-10 01:12:02.780295 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-04-10 01:12:02.780309 | orchestrator | Thursday 10 April 2025 01:10:48 +0000 (0:00:02.142) 0:02:54.779 ******** 2025-04-10 01:12:02.780323 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:12:02.780337 | orchestrator | 2025-04-10 01:12:02.780351 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-04-10 01:12:02.780365 | orchestrator | Thursday 10 April 2025 01:11:16 +0000 (0:00:28.069) 0:03:22.848 ******** 2025-04-10 01:12:02.780379 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:12:02.780393 | orchestrator | 2025-04-10 01:12:02.780407 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-10 01:12:02.780421 | orchestrator | Thursday 10 April 
2025 01:11:18 +0000 (0:00:02.101) 0:03:24.949 ******** 2025-04-10 01:12:02.780435 | orchestrator | 2025-04-10 01:12:02.780449 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-10 01:12:02.780463 | orchestrator | Thursday 10 April 2025 01:11:19 +0000 (0:00:00.073) 0:03:25.023 ******** 2025-04-10 01:12:02.780477 | orchestrator | 2025-04-10 01:12:02.780491 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-10 01:12:02.780505 | orchestrator | Thursday 10 April 2025 01:11:19 +0000 (0:00:00.057) 0:03:25.080 ******** 2025-04-10 01:12:02.780519 | orchestrator | 2025-04-10 01:12:02.780533 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-04-10 01:12:02.780547 | orchestrator | Thursday 10 April 2025 01:11:19 +0000 (0:00:00.217) 0:03:25.298 ******** 2025-04-10 01:12:02.780561 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:12:02.780575 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:12:02.780590 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:12:02.780613 | orchestrator | 2025-04-10 01:12:02.780636 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:12:02.780659 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-04-10 01:12:02.780685 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-04-10 01:12:02.780717 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-04-10 01:12:02.780732 | orchestrator | 2025-04-10 01:12:02.780746 | orchestrator | 2025-04-10 01:12:02.780760 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:12:02.780774 | orchestrator | Thursday 10 April 2025 01:12:00 +0000 
(0:00:41.692) 0:04:06.990 ******** 2025-04-10 01:12:02.780788 | orchestrator | =============================================================================== 2025-04-10 01:12:02.780802 | orchestrator | glance : Restart glance-api container ---------------------------------- 41.69s 2025-04-10 01:12:02.780816 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.07s 2025-04-10 01:12:02.780830 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 17.24s 2025-04-10 01:12:02.780844 | orchestrator | glance : Ensuring glance service ceph config subdir exists ------------- 14.29s 2025-04-10 01:12:02.780859 | orchestrator | glance : Copying over glance-image-import.conf ------------------------- 13.57s 2025-04-10 01:12:02.780873 | orchestrator | glance : Copying over property-protections-rules.conf ------------------ 12.66s 2025-04-10 01:12:02.780887 | orchestrator | glance : Creating TLS backend PEM File --------------------------------- 11.61s 2025-04-10 01:12:02.780901 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 9.89s 2025-04-10 01:12:02.780915 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 7.59s 2025-04-10 01:12:02.780929 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.13s 2025-04-10 01:12:02.780943 | orchestrator | glance : Copying over config.json files for services -------------------- 6.91s 2025-04-10 01:12:02.780957 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.89s 2025-04-10 01:12:02.780971 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.31s 2025-04-10 01:12:02.780985 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.69s 2025-04-10 01:12:02.780999 | orchestrator | glance : Ensuring config directories exist 
------------------------------ 5.20s 2025-04-10 01:12:02.781013 | orchestrator | glance : Check glance containers ---------------------------------------- 5.10s 2025-04-10 01:12:02.781053 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.42s 2025-04-10 01:12:02.781069 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.41s 2025-04-10 01:12:02.781083 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.37s 2025-04-10 01:12:02.781104 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 4.09s 2025-04-10 01:12:05.827801 | orchestrator | 2025-04-10 01:12:02 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:12:05.827925 | orchestrator | 2025-04-10 01:12:02 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED 2025-04-10 01:12:05.827946 | orchestrator | 2025-04-10 01:12:02 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED 2025-04-10 01:12:05.827962 | orchestrator | 2025-04-10 01:12:02 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:12:05.827995 | orchestrator | 2025-04-10 01:12:05 | INFO  | Task eaff6e38-4f4b-4da5-9201-6cef6099abd5 is in state STARTED 2025-04-10 01:12:05.829196 | orchestrator | 2025-04-10 01:12:05 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:12:05.831166 | orchestrator | 2025-04-10 01:12:05 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:12:05.833181 | orchestrator | 2025-04-10 01:12:05 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED 2025-04-10 01:12:05.835238 | orchestrator | 2025-04-10 01:12:05 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED 2025-04-10 01:12:05.835696 | orchestrator | 2025-04-10 01:12:05 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:12:08.873895 | 
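The TASKS RECAP above is the per-task timing profile (slowest first, duration right-aligned after a dash fill) that Ansible's `profile_tasks`-style callback prints at the end of a play. A small sketch of extracting `(task, seconds)` pairs from such lines; the exact regex is an assumption about the format, not part of Ansible:

```python
import re

# Parse profile-style recap lines such as
# "glance : Restart glance-api container ----------------- 41.69s"
# Regex is an assumption about the dash-fill format shown in the log.
RECAP = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    out = []
    for line in lines:
        m = RECAP.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return out

sample = [
    "glance : Restart glance-api container ---------------------------------- 41.69s",
    "glance : Running Glance bootstrap container ---------------------------- 28.07s",
]
print(parse_recap(sample))
```

Applied to the recap above, this kind of parsing makes it easy to see that the glance-api container restart (41.69 s) and the bootstrap container (28.07 s) dominate the 4-minute play.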
orchestrator | 2025-04-10 01:12:08 | INFO  | Task eaff6e38-4f4b-4da5-9201-6cef6099abd5 is in state STARTED
2025-04-10 01:12:08.875479 | orchestrator | 2025-04-10 01:12:08 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:12:08.877352 | orchestrator | 2025-04-10 01:12:08 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:12:08.885350 | orchestrator | 2025-04-10 01:12:08 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state STARTED
2025-04-10 01:12:08.888381 | orchestrator | 2025-04-10 01:12:08 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state STARTED
[... identical five-task poll and "Wait 1 second(s) until the next check" repeated every ~3 seconds; repeats omitted ...]
2025-04-10 01:12:30.282593 | orchestrator | 2025-04-10 01:12:30 | INFO  | Task 2637d6b1-5f6d-4aef-9b0b-eb392b73215a is in state SUCCESS
[... polling of the four remaining tasks repeated; repeats omitted ...]
2025-04-10 01:13:34.430584 | orchestrator | 2025-04-10 01:13:34 | INFO  | Task 37f9a9d4-c94c-4ef1-97b1-84316f842c06 is in state SUCCESS
[... polling of the three remaining tasks repeated; repeats omitted ...]
2025-04-10 01:14:07.980468 | orchestrator | 2025-04-10 01:14:07 | INFO  | Task eaff6e38-4f4b-4da5-9201-6cef6099abd5 is in state SUCCESS
2025-04-10 01:14:07.981591 | orchestrator |
2025-04-10 01:14:07.981667 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-10 01:14:07.981715 | orchestrator |
2025-04-10 01:14:07.981740 | orchestrator | TASK [Group hosts based on Kolla action]
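The long run of "Task … is in state STARTED / Wait 1 second(s) until the next check" lines is the deploy wrapper polling its queued task IDs until each reaches a terminal state, logging every task on every pass. A minimal sketch of that pattern; the `get_state` lookup is hypothetical (in the real system this would query the task backend), and the terminal-state set is an assumption:

```python
import time

# Terminal states are an assumption modeled on the SUCCESS lines in the log.
TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll every pending task each pass, dropping tasks that reach
    a terminal state, and sleep between passes - like the log above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)  # hypothetical backend lookup
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL:
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Toy driver: each task flips to SUCCESS on its third poll.
polls = {"a": 0, "b": 0}
def fake_state(tid):
    polls[tid] += 1
    return "SUCCESS" if polls[tid] >= 3 else "STARTED"

wait_for_tasks(["a", "b"], fake_state, interval=0.01, log=lambda s: None)
print(polls)  # each task polled 3 times
```

This also explains the shape of the log: tasks disappear from the poll output one at a time as they hit SUCCESS, which is why the repeated blocks shrink from five tasks to four, then three.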
*************************************** 2025-04-10 01:14:07.981764 | orchestrator | Thursday 10 April 2025 01:11:31 +0000 (0:00:00.348) 0:00:00.348 ******** 2025-04-10 01:14:07.981790 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:14:07.981942 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:14:07.982233 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:14:07.982271 | orchestrator | 2025-04-10 01:14:07.982297 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 01:14:07.982324 | orchestrator | Thursday 10 April 2025 01:11:32 +0000 (0:00:00.492) 0:00:00.841 ******** 2025-04-10 01:14:07.982822 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-04-10 01:14:07.982856 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-04-10 01:14:07.982881 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-04-10 01:14:07.983369 | orchestrator | 2025-04-10 01:14:07.983402 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-04-10 01:14:07.983417 | orchestrator | 2025-04-10 01:14:07.983432 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-10 01:14:07.983447 | orchestrator | Thursday 10 April 2025 01:11:32 +0000 (0:00:00.401) 0:00:01.242 ******** 2025-04-10 01:14:07.983462 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:14:07.983478 | orchestrator | 2025-04-10 01:14:07.983494 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-04-10 01:14:07.983508 | orchestrator | Thursday 10 April 2025 01:11:33 +0000 (0:00:00.850) 0:00:02.093 ******** 2025-04-10 01:14:07.983524 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-04-10 01:14:07.983539 | orchestrator | 2025-04-10 01:14:07.983554 | orchestrator 
| TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-04-10 01:14:07.983568 | orchestrator | Thursday 10 April 2025 01:11:37 +0000 (0:00:03.660) 0:00:05.754 ******** 2025-04-10 01:14:07.983583 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-04-10 01:14:07.983598 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-04-10 01:14:07.983612 | orchestrator | 2025-04-10 01:14:07.983627 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-04-10 01:14:07.983641 | orchestrator | Thursday 10 April 2025 01:11:44 +0000 (0:00:07.038) 0:00:12.792 ******** 2025-04-10 01:14:07.983656 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-10 01:14:07.983672 | orchestrator | 2025-04-10 01:14:07.983686 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-04-10 01:14:07.983711 | orchestrator | Thursday 10 April 2025 01:11:47 +0000 (0:00:03.762) 0:00:16.555 ******** 2025-04-10 01:14:07.983726 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-10 01:14:07.983741 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-04-10 01:14:07.983755 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-04-10 01:14:07.983770 | orchestrator | 2025-04-10 01:14:07.983784 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-04-10 01:14:07.983799 | orchestrator | Thursday 10 April 2025 01:11:56 +0000 (0:00:08.204) 0:00:24.759 ******** 2025-04-10 01:14:07.983814 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-10 01:14:07.983828 | orchestrator | 2025-04-10 01:14:07.983843 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-04-10 01:14:07.983857 | 
orchestrator | Thursday 10 April 2025 01:11:59 +0000 (0:00:03.240) 0:00:27.999 ******** 2025-04-10 01:14:07.983872 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-04-10 01:14:07.983886 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-04-10 01:14:07.983901 | orchestrator | 2025-04-10 01:14:07.983916 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-04-10 01:14:07.983931 | orchestrator | Thursday 10 April 2025 01:12:07 +0000 (0:00:07.950) 0:00:35.950 ******** 2025-04-10 01:14:07.983945 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-04-10 01:14:07.983960 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-04-10 01:14:07.983989 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-04-10 01:14:07.984004 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-04-10 01:14:07.984023 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-04-10 01:14:07.984038 | orchestrator | 2025-04-10 01:14:07.984132 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-10 01:14:07.984151 | orchestrator | Thursday 10 April 2025 01:12:23 +0000 (0:00:16.165) 0:00:52.116 ******** 2025-04-10 01:14:07.984165 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:14:07.984179 | orchestrator | 2025-04-10 01:14:07.984194 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-04-10 01:14:07.984208 | orchestrator | Thursday 10 April 2025 01:12:24 +0000 (0:00:00.848) 0:00:52.965 ******** 2025-04-10 01:14:07.984273 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "503 Service Unavailable\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "}
2025-04-10 01:14:07.984294 | orchestrator |
2025-04-10 01:14:07.984309 | orchestrator | PLAY RECAP *********************************************************************
2025-04-10 01:14:07.984330 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-04-10 01:14:07.984347 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 01:14:07.984361 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-10 01:14:07.984376 | orchestrator |
2025-04-10 01:14:07.984390 | orchestrator |
2025-04-10 01:14:07.984402 | orchestrator | TASKS RECAP ********************************************************************
2025-04-10 01:14:07.984415 | orchestrator | Thursday 10 April 2025 01:12:27 +0000 (0:00:03.323) 0:00:56.288 ********
2025-04-10 01:14:07.984427 | orchestrator | ===============================================================================
2025-04-10 01:14:07.984440 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.17s
2025-04-10 01:14:07.984452 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.20s
2025-04-10 01:14:07.984465 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.95s
2025-04-10 01:14:07.984477 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.04s
2025-04-10 01:14:07.984490 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.76s
2025-04-10 01:14:07.984502 | orchestrator | service-ks-register : octavia | Creating services -----------------------
3.66s 2025-04-10 01:14:07.984515 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.32s 2025-04-10 01:14:07.984527 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.24s 2025-04-10 01:14:07.984540 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.85s 2025-04-10 01:14:07.984552 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.85s 2025-04-10 01:14:07.984565 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s 2025-04-10 01:14:07.984577 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s 2025-04-10 01:14:07.984590 | orchestrator | 2025-04-10 01:14:07.984602 | orchestrator | 2025-04-10 01:14:07.984614 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 01:14:07.984627 | orchestrator | 2025-04-10 01:14:07.984649 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 01:14:07.984661 | orchestrator | Thursday 10 April 2025 01:11:25 +0000 (0:00:00.290) 0:00:00.290 ******** 2025-04-10 01:14:07.984674 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:14:07.984686 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:14:07.984699 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:14:07.984712 | orchestrator | 2025-04-10 01:14:07.984724 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 01:14:07.984736 | orchestrator | Thursday 10 April 2025 01:11:25 +0000 (0:00:00.479) 0:00:00.769 ******** 2025-04-10 01:14:07.984749 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-04-10 01:14:07.984766 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-04-10 01:14:07.984779 | orchestrator | ok: [testbed-node-2] => 
(item=enable_nova_True) 2025-04-10 01:14:07.984791 | orchestrator | 2025-04-10 01:14:07.984804 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-04-10 01:14:07.984817 | orchestrator | 2025-04-10 01:14:07.984829 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-04-10 01:14:07.984842 | orchestrator | Thursday 10 April 2025 01:11:26 +0000 (0:00:00.869) 0:00:01.639 ******** 2025-04-10 01:14:07.984854 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:14:07.984867 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:14:07.984880 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:14:07.984900 | orchestrator | 2025-04-10 01:14:07.984914 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:14:07.984927 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:14:07.984940 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:14:07.984953 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:14:07.984965 | orchestrator | 2025-04-10 01:14:07.984978 | orchestrator | 2025-04-10 01:14:07.984990 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:14:07.985003 | orchestrator | Thursday 10 April 2025 01:13:32 +0000 (0:02:05.395) 0:02:07.034 ******** 2025-04-10 01:14:07.985015 | orchestrator | =============================================================================== 2025-04-10 01:14:07.985028 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 125.40s 2025-04-10 01:14:07.985040 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2025-04-10 01:14:07.985109 | orchestrator | Group hosts based on 
Kolla action --------------------------------------- 0.48s 2025-04-10 01:14:07.985123 | orchestrator | 2025-04-10 01:14:07.985136 | orchestrator | 2025-04-10 01:14:07.985148 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-10 01:14:07.985161 | orchestrator | 2025-04-10 01:14:07.985173 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-10 01:14:07.985218 | orchestrator | Thursday 10 April 2025 01:12:04 +0000 (0:00:00.321) 0:00:00.321 ******** 2025-04-10 01:14:07.985234 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:14:07.985248 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:14:07.985261 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:14:07.985273 | orchestrator | 2025-04-10 01:14:07.985286 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-10 01:14:07.985298 | orchestrator | Thursday 10 April 2025 01:12:04 +0000 (0:00:00.395) 0:00:00.716 ******** 2025-04-10 01:14:07.985311 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-04-10 01:14:07.985323 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-04-10 01:14:07.985336 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-04-10 01:14:07.985348 | orchestrator | 2025-04-10 01:14:07.985361 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-04-10 01:14:07.985381 | orchestrator | 2025-04-10 01:14:07.985393 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-04-10 01:14:07.985406 | orchestrator | Thursday 10 April 2025 01:12:05 +0000 (0:00:00.346) 0:00:01.062 ******** 2025-04-10 01:14:07.985419 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:14:07.985430 | orchestrator | 2025-04-10 01:14:07.985440 | 
orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-04-10 01:14:07.985450 | orchestrator | Thursday 10 April 2025 01:12:05 +0000 (0:00:00.796) 0:00:01.859 ******** 2025-04-10 01:14:07.985461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.985476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.985488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.985498 | orchestrator | 2025-04-10 01:14:07.985509 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-04-10 01:14:07.985519 | orchestrator | Thursday 10 April 2025 01:12:06 +0000 (0:00:01.003) 0:00:02.863 ******** 2025-04-10 01:14:07.985529 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-04-10 01:14:07.985539 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-04-10 01:14:07.985549 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-10 01:14:07.985560 | orchestrator | 2025-04-10 01:14:07.985570 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-04-10 01:14:07.985580 | orchestrator | Thursday 10 April 2025 01:12:07 +0000 (0:00:00.629) 0:00:03.492 ******** 2025-04-10 01:14:07.985590 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:14:07.985600 | orchestrator | 2025-04-10 01:14:07.985611 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-04-10 01:14:07.985621 | orchestrator | Thursday 10 April 2025 01:12:08 +0000 (0:00:00.709) 0:00:04.202 ******** 2025-04-10 01:14:07.985656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.985678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.985689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.985700 | orchestrator | 2025-04-10 01:14:07.985710 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal 
TLS certificate] *** 2025-04-10 01:14:07.985720 | orchestrator | Thursday 10 April 2025 01:12:09 +0000 (0:00:01.615) 0:00:05.817 ******** 2025-04-10 01:14:07.985731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-10 01:14:07.985741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-10 01:14:07.985752 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:14:07.985763 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:14:07.985796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-10 01:14:07.985815 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:14:07.985826 | orchestrator | 2025-04-10 01:14:07.985836 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-04-10 01:14:07.985846 | orchestrator | Thursday 10 April 2025 01:12:10 +0000 (0:00:00.581) 0:00:06.399 ******** 2025-04-10 01:14:07.985856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-10 01:14:07.985867 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:14:07.985878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-10 01:14:07.985888 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:14:07.985898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-10 01:14:07.985909 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:14:07.985919 | orchestrator | 2025-04-10 01:14:07.985930 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-04-10 01:14:07.985944 | orchestrator | Thursday 10 April 2025 01:12:11 +0000 (0:00:00.699) 0:00:07.098 ******** 2025-04-10 01:14:07.985955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.985976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.986009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.986074 | orchestrator | 2025-04-10 01:14:07.986086 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-04-10 01:14:07.986096 | orchestrator | Thursday 10 April 2025 01:12:12 +0000 (0:00:01.367) 0:00:08.466 ******** 2025-04-10 01:14:07.986107 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.986118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.986128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-10 01:14:07.986139 | orchestrator |
2025-04-10 01:14:07.986149 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-04-10 01:14:07.986159 | orchestrator | Thursday 10 April 2025 01:12:14 +0000 (0:00:01.591) 0:00:10.058 ********
2025-04-10 01:14:07.986175 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:14:07.986191 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:14:07.986202 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:14:07.986212 | orchestrator |
2025-04-10 01:14:07.986222 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-04-10 01:14:07.986232 | orchestrator | Thursday 10 April 2025 01:12:14 +0000 (0:00:00.328) 0:00:10.386 ********
2025-04-10 01:14:07.986242 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-04-10 01:14:07.986252 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-04-10 01:14:07.986263 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-04-10 01:14:07.986273 | orchestrator |
2025-04-10 01:14:07.986283 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-04-10 01:14:07.986293 | orchestrator | Thursday 10 April 2025 01:12:15 +0000 (0:00:01.459) 0:00:11.846 ********
2025-04-10 01:14:07.986303 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-04-10 01:14:07.986314 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-04-10 01:14:07.986324 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-04-10 01:14:07.986334 | orchestrator |
2025-04-10 01:14:07.986370 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-04-10 01:14:07.986382 | orchestrator | Thursday 10 April 2025 01:12:17 +0000 (0:00:01.414) 0:00:13.260 ********
2025-04-10 01:14:07.986392 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-10 01:14:07.986403 | orchestrator |
2025-04-10 01:14:07.986413 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-04-10 01:14:07.986423 | orchestrator | Thursday 10 April 2025 01:12:17 +0000 (0:00:00.438) 0:00:13.699 ********
2025-04-10 01:14:07.986433 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-04-10 01:14:07.986444 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-04-10 01:14:07.986454 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:14:07.986464 | orchestrator | ok: [testbed-node-1]
2025-04-10 01:14:07.986474 | orchestrator | ok: [testbed-node-2]
2025-04-10 01:14:07.986485 | orchestrator |
2025-04-10 01:14:07.986495 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-04-10 01:14:07.986505 | orchestrator | Thursday 10 April 2025 01:12:18 +0000 (0:00:00.909) 0:00:14.608 ********
2025-04-10 01:14:07.986515 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:14:07.986525 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:14:07.986535 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:14:07.986545 | orchestrator |
2025-04-10 01:14:07.986556 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-04-10 01:14:07.986566 | orchestrator | Thursday 10 April 2025 01:12:19 +0000 (0:00:00.453) 0:00:15.061 ********
2025-04-10 01:14:07.986576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1081027, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9605453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1081027, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9605453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1081027, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9605453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1080988, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9485452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1080988, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9485452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1080988, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9485452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1080972, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9455452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1080972, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9455452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1080972, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9455452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1081019, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9585454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1081019, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9585454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1081019, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9585454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1080952, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9425452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1080952, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9425452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1080952, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9425452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp':
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1080977, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9465451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1080977, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9465451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1080977, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9465451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1081014, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9575453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1081014, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9575453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1081014, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9575453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1080949, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9415452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1080949, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9415452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1080949, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9415452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1080901, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.928545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1080901, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.928545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1080901, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.928545, 'gr_name': 'root',
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1080958, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.943545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1080958, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.943545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.986986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1080958, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.943545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1080936, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.937545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1080936, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.937545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1080936, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.937545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1080993, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244091.9565454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1080993, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244091.9565454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1080993, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244091.9565454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1080963, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244091.9445453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1080963, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244091.9445453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False,
'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1080963, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244091.9445453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1081024, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9585454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1081024, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9585454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1081024, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9585454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1080944, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.940545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1080944, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.940545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1080944, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.940545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1080980, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9475453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1080980, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9475453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False,
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1080980, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9475453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1080904, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9365451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1080904, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9365451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1080904, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9365451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1080939, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9395452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1080939, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9395452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1080939, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9395452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1080968, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9455452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1080968, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9455452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-10 01:14:07.987368 | orchestrator |
changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1080968, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9455452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1081075, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9815457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1081075, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9815457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1081075, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9815457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1081070, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9735456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1081070, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9735456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1081070, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9735456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1081138, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9935458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1081138, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 
1737057118.0, 'ctime': 1744244091.9935458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1081138, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9935458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1081036, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9615455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 31128, 'inode': 1081036, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9615455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1081150, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9975457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1081036, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9615455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1081150, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9975457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1081106, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9825456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1081150, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9975457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1081106, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9825456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1081112, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9895456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1081106, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9825456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987682 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1081112, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9895456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1081039, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9625454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1081112, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9895456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1081039, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9625454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1081072, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9745455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1081072, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9745455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1081039, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9625454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1081162, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9975457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1081072, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9745455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1081162, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9975457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1081132, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244091.9915457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1081162, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1744244091.9975457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1081132, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244091.9915457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1081052, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9655454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1081052, 
'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9655454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1081132, 'dev': 192, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744244091.9915457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1081046, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9635453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1081052, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9655454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1081046, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9635453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1081057, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9665453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1081057, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9665453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1081046, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9635453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1081062, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9735456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1081062, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9735456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1081057, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9665453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.987998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1081166, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9985456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.988008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': 
'/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1081166, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9985456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.988019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1081062, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9735456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.988035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1081166, 'dev': 192, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744244091.9985456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-10 01:14:07.988102 | orchestrator | 2025-04-10 01:14:07.988121 | orchestrator | TASK 
[grafana : Check grafana containers] ************************************** 2025-04-10 01:14:07.988138 | orchestrator | Thursday 10 April 2025 01:12:54 +0000 (0:00:35.169) 0:00:50.231 ******** 2025-04-10 01:14:07.988162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.988180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.988197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-10 01:14:07.988215 | orchestrator | 2025-04-10 01:14:07.988232 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-04-10 01:14:07.988250 | orchestrator | Thursday 10 April 2025 01:12:55 +0000 (0:00:01.239) 0:00:51.470 ******** 2025-04-10 01:14:07.988267 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:14:07.988280 | orchestrator | 2025-04-10 01:14:07.988294 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-04-10 01:14:07.988310 | orchestrator | Thursday 10 April 2025 01:12:58 +0000 (0:00:02.792) 0:00:54.263 ******** 2025-04-10 01:14:07.988327 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:14:07.988344 | orchestrator | 2025-04-10 01:14:07.988360 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-04-10 01:14:07.988375 | orchestrator | Thursday 10 April 2025 01:13:00 +0000 (0:00:02.369) 0:00:56.632 ******** 2025-04-10 01:14:07.988391 | orchestrator | 2025-04-10 01:14:07.988407 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-04-10 01:14:07.988434 | orchestrator | Thursday 10 April 2025 01:13:00 +0000 (0:00:00.061) 0:00:56.694 ******** 2025-04-10 01:14:07.988451 | orchestrator | 2025-04-10 01:14:07.988469 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-04-10 01:14:07.988486 | orchestrator | Thursday 10 April 2025 01:13:00 +0000 (0:00:00.056) 0:00:56.751 ******** 2025-04-10 01:14:07.988497 | orchestrator | 2025-04-10 
01:14:07.988507 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-04-10 01:14:07.988517 | orchestrator | Thursday 10 April 2025 01:13:01 +0000 (0:00:00.209) 0:00:56.960 ******** 2025-04-10 01:14:07.988527 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:14:07.988537 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:14:07.988547 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:14:07.988557 | orchestrator | 2025-04-10 01:14:07.988568 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-04-10 01:14:07.988578 | orchestrator | Thursday 10 April 2025 01:13:03 +0000 (0:00:01.960) 0:00:58.920 ******** 2025-04-10 01:14:07.988586 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:14:07.988595 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:14:07.988603 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-04-10 01:14:07.988612 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-04-10 01:14:07.988621 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2025-04-10 01:14:07.988630 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:14:07.988638 | orchestrator | 2025-04-10 01:14:07.988647 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-04-10 01:14:07.988656 | orchestrator | Thursday 10 April 2025 01:13:42 +0000 (0:00:39.348) 0:01:38.268 ******** 2025-04-10 01:14:07.988664 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:14:07.988673 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:14:07.988681 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:14:07.988690 | orchestrator | 2025-04-10 01:14:07.988698 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-04-10 01:14:07.988707 | orchestrator | Thursday 10 April 2025 01:14:01 +0000 (0:00:18.976) 0:01:57.244 ******** 2025-04-10 01:14:07.988715 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:14:07.988724 | orchestrator | 2025-04-10 01:14:07.988733 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-04-10 01:14:07.988746 | orchestrator | Thursday 10 April 2025 01:14:03 +0000 (0:00:02.383) 0:01:59.628 ******** 2025-04-10 01:14:11.034612 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:14:11.034742 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:14:11.034762 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:14:11.034777 | orchestrator | 2025-04-10 01:14:11.034793 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-04-10 01:14:11.034809 | orchestrator | Thursday 10 April 2025 01:14:04 +0000 (0:00:00.466) 0:02:00.094 ******** 2025-04-10 01:14:11.034825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-04-10 01:14:11.034862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-04-10 01:14:11.034879 | orchestrator | 2025-04-10 01:14:11.034894 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-04-10 01:14:11.034908 | orchestrator | Thursday 10 April 2025 01:14:06 +0000 (0:00:02.591) 0:02:02.685 ******** 2025-04-10 01:14:11.034947 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:14:11.034963 | orchestrator | 2025-04-10 01:14:11.034978 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:14:11.034993 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-10 01:14:11.035010 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-10 01:14:11.035026 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-10 01:14:11.035041 | orchestrator | 2025-04-10 01:14:11.035118 | orchestrator | 2025-04-10 01:14:11.035132 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-10 01:14:11.035146 | orchestrator | Thursday 10 April 2025 01:14:07 +0000 (0:00:00.404) 0:02:03.090 ******** 2025-04-10 01:14:11.035160 | orchestrator | =============================================================================== 2025-04-10 01:14:11.035175 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.35s 2025-04-10 01:14:11.035192 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 35.17s 2025-04-10 01:14:11.035209 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 18.98s 2025-04-10 01:14:11.035224 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.79s 2025-04-10 01:14:11.035240 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.59s 2025-04-10 01:14:11.035255 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.38s 2025-04-10 01:14:11.035271 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.37s 2025-04-10 01:14:11.035288 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.96s 2025-04-10 01:14:11.035303 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.62s 2025-04-10 01:14:11.035319 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.59s 2025-04-10 01:14:11.035335 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.46s 2025-04-10 01:14:11.035350 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.41s 2025-04-10 01:14:11.035366 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.37s 2025-04-10 01:14:11.035381 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.24s 2025-04-10 01:14:11.035397 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.00s 2025-04-10 01:14:11.035419 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.91s 2025-04-10 01:14:11.035435 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.80s 2025-04-10 01:14:11.035451 | orchestrator | grafana : include_tasks 
------------------------------------------------- 0.71s 2025-04-10 01:14:11.035467 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.70s 2025-04-10 01:14:11.035483 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.63s 2025-04-10 01:14:11.035500 | orchestrator | 2025-04-10 01:14:07 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:11.035516 | orchestrator | 2025-04-10 01:14:07 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:11.035533 | orchestrator | 2025-04-10 01:14:07 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:11.035565 | orchestrator | 2025-04-10 01:14:11 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:14.077739 | orchestrator | 2025-04-10 01:14:11 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:14.077875 | orchestrator | 2025-04-10 01:14:11 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:14.077908 | orchestrator | 2025-04-10 01:14:14 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:14.078722 | orchestrator | 2025-04-10 01:14:14 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:17.125391 | orchestrator | 2025-04-10 01:14:14 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:17.125504 | orchestrator | 2025-04-10 01:14:17 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:17.126107 | orchestrator | 2025-04-10 01:14:17 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:20.186254 | orchestrator | 2025-04-10 01:14:17 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:20.186374 | orchestrator | 2025-04-10 01:14:20 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 
01:14:20.188211 | orchestrator | 2025-04-10 01:14:20 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:23.231876 | orchestrator | 2025-04-10 01:14:20 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:23.232025 | orchestrator | 2025-04-10 01:14:23 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:23.232501 | orchestrator | 2025-04-10 01:14:23 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:26.286626 | orchestrator | 2025-04-10 01:14:23 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:26.286757 | orchestrator | 2025-04-10 01:14:26 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:26.288510 | orchestrator | 2025-04-10 01:14:26 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:29.334942 | orchestrator | 2025-04-10 01:14:26 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:29.335134 | orchestrator | 2025-04-10 01:14:29 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:29.337347 | orchestrator | 2025-04-10 01:14:29 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:32.377410 | orchestrator | 2025-04-10 01:14:29 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:32.377553 | orchestrator | 2025-04-10 01:14:32 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:32.380301 | orchestrator | 2025-04-10 01:14:32 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:35.425705 | orchestrator | 2025-04-10 01:14:32 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:35.425851 | orchestrator | 2025-04-10 01:14:35 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:35.427143 | orchestrator | 2025-04-10 01:14:35 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:38.477869 | orchestrator | 2025-04-10 01:14:35 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:38.478013 | orchestrator | 2025-04-10 01:14:38 | INFO  | Task e2c8a455-02dc-41c3-b6e9-7dc465ddab68 is in state STARTED 2025-04-10 01:14:38.478858 | orchestrator | 2025-04-10 01:14:38 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:38.481096 | orchestrator | 2025-04-10 01:14:38 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:41.528923 | orchestrator | 2025-04-10 01:14:38 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:41.529101 | orchestrator | 2025-04-10 01:14:41 | INFO  | Task e2c8a455-02dc-41c3-b6e9-7dc465ddab68 is in state STARTED 2025-04-10 01:14:41.534331 | orchestrator | 2025-04-10 01:14:41 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:41.534361 | orchestrator | 2025-04-10 01:14:41 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:44.576001 | orchestrator | 2025-04-10 01:14:41 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:44.576206 | orchestrator | 2025-04-10 01:14:44 | INFO  | Task e2c8a455-02dc-41c3-b6e9-7dc465ddab68 is in state STARTED 2025-04-10 01:14:44.577451 | orchestrator | 2025-04-10 01:14:44 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:44.581145 | orchestrator | 2025-04-10 01:14:44 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:47.622682 | orchestrator | 2025-04-10 01:14:44 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:47.622849 | orchestrator | 2025-04-10 01:14:47 | INFO  | Task e2c8a455-02dc-41c3-b6e9-7dc465ddab68 is in state SUCCESS 2025-04-10 01:14:47.623370 | orchestrator | 2025-04-10 01:14:47 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state 
STARTED 2025-04-10 01:14:47.625347 | orchestrator | 2025-04-10 01:14:47 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:50.675367 | orchestrator | 2025-04-10 01:14:47 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:50.675502 | orchestrator | 2025-04-10 01:14:50 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:50.678907 | orchestrator | 2025-04-10 01:14:50 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:53.721736 | orchestrator | 2025-04-10 01:14:50 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:53.721884 | orchestrator | 2025-04-10 01:14:53 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:56.760851 | orchestrator | 2025-04-10 01:14:53 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:56.760971 | orchestrator | 2025-04-10 01:14:53 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:56.761009 | orchestrator | 2025-04-10 01:14:56 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:56.762094 | orchestrator | 2025-04-10 01:14:56 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:14:59.820501 | orchestrator | 2025-04-10 01:14:56 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:14:59.820643 | orchestrator | 2025-04-10 01:14:59 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:14:59.821412 | orchestrator | 2025-04-10 01:14:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:02.876198 | orchestrator | 2025-04-10 01:14:59 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:02.876339 | orchestrator | 2025-04-10 01:15:02 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:02.877388 | orchestrator | 2025-04-10 01:15:02 | INFO  
| Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:05.916269 | orchestrator | 2025-04-10 01:15:02 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:05.916425 | orchestrator | 2025-04-10 01:15:05 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:05.916913 | orchestrator | 2025-04-10 01:15:05 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:08.975303 | orchestrator | 2025-04-10 01:15:05 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:08.975445 | orchestrator | 2025-04-10 01:15:08 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:08.975756 | orchestrator | 2025-04-10 01:15:08 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:12.027783 | orchestrator | 2025-04-10 01:15:08 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:12.027938 | orchestrator | 2025-04-10 01:15:12 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:12.029017 | orchestrator | 2025-04-10 01:15:12 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:15.072596 | orchestrator | 2025-04-10 01:15:12 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:15.072694 | orchestrator | 2025-04-10 01:15:15 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:15.078709 | orchestrator | 2025-04-10 01:15:15 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:18.129028 | orchestrator | 2025-04-10 01:15:15 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:18.129135 | orchestrator | 2025-04-10 01:15:18 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:18.130712 | orchestrator | 2025-04-10 01:15:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 
01:15:18.131125 | orchestrator | 2025-04-10 01:15:18 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:21.208154 | orchestrator | 2025-04-10 01:15:21 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:21.209765 | orchestrator | 2025-04-10 01:15:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:24.258776 | orchestrator | 2025-04-10 01:15:21 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:24.258949 | orchestrator | 2025-04-10 01:15:24 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:24.260424 | orchestrator | 2025-04-10 01:15:24 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:27.312683 | orchestrator | 2025-04-10 01:15:24 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:27.312829 | orchestrator | 2025-04-10 01:15:27 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:27.313457 | orchestrator | 2025-04-10 01:15:27 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:30.354389 | orchestrator | 2025-04-10 01:15:27 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:30.354519 | orchestrator | 2025-04-10 01:15:30 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:33.397307 | orchestrator | 2025-04-10 01:15:30 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:33.397436 | orchestrator | 2025-04-10 01:15:30 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:33.397474 | orchestrator | 2025-04-10 01:15:33 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:33.400523 | orchestrator | 2025-04-10 01:15:33 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:36.445825 | orchestrator | 2025-04-10 01:15:33 | INFO  | Wait 1 second(s) 
until the next check 2025-04-10 01:15:36.445923 | orchestrator | 2025-04-10 01:15:36 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:36.447525 | orchestrator | 2025-04-10 01:15:36 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:39.491997 | orchestrator | 2025-04-10 01:15:36 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:39.492179 | orchestrator | 2025-04-10 01:15:39 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:39.494306 | orchestrator | 2025-04-10 01:15:39 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:42.544686 | orchestrator | 2025-04-10 01:15:39 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:42.544833 | orchestrator | 2025-04-10 01:15:42 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:42.547244 | orchestrator | 2025-04-10 01:15:42 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:45.599457 | orchestrator | 2025-04-10 01:15:42 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:45.599602 | orchestrator | 2025-04-10 01:15:45 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:45.601307 | orchestrator | 2025-04-10 01:15:45 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:48.652468 | orchestrator | 2025-04-10 01:15:45 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:48.652625 | orchestrator | 2025-04-10 01:15:48 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:48.654823 | orchestrator | 2025-04-10 01:15:48 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:51.695992 | orchestrator | 2025-04-10 01:15:48 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:51.696168 | orchestrator | 2025-04-10 
01:15:51 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:51.697362 | orchestrator | 2025-04-10 01:15:51 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:54.737532 | orchestrator | 2025-04-10 01:15:51 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:54.737681 | orchestrator | 2025-04-10 01:15:54 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:54.737975 | orchestrator | 2025-04-10 01:15:54 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:15:57.782213 | orchestrator | 2025-04-10 01:15:54 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:15:57.782348 | orchestrator | 2025-04-10 01:15:57 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:15:57.785107 | orchestrator | 2025-04-10 01:15:57 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:16:00.832498 | orchestrator | 2025-04-10 01:15:57 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:16:00.832621 | orchestrator | 2025-04-10 01:16:00 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:16:00.834216 | orchestrator | 2025-04-10 01:16:00 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:16:03.885488 | orchestrator | 2025-04-10 01:16:00 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:16:03.885633 | orchestrator | 2025-04-10 01:16:03 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:16:03.885886 | orchestrator | 2025-04-10 01:16:03 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:16:06.933957 | orchestrator | 2025-04-10 01:16:03 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:16:06.934185 | orchestrator | 2025-04-10 01:16:06 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state 
STARTED 2025-04-10 01:16:06.938465 | orchestrator | 2025-04-10 01:16:06 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:16:09.986118 | orchestrator | 2025-04-10 01:16:06 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:16:09.986272 | orchestrator | 2025-04-10 01:16:09 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:16:09.987354 | orchestrator | 2025-04-10 01:16:09 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:16:13.027651 | orchestrator | 2025-04-10 01:16:09 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:16:13.027803 | orchestrator | 2025-04-10 01:16:13 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:16:13.027993 | orchestrator | 2025-04-10 01:16:13 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:16:16.074478 | orchestrator | 2025-04-10 01:16:13 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:16:16.074623 | orchestrator | 2025-04-10 01:16:16 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:16:16.076139 | orchestrator | 2025-04-10 01:16:16 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:16:19.123999 | orchestrator | 2025-04-10 01:16:16 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:16:19.124192 | orchestrator | 2025-04-10 01:16:19 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:16:19.125384 | orchestrator | 2025-04-10 01:16:19 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:16:22.170381 | orchestrator | 2025-04-10 01:16:19 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:16:22.170522 | orchestrator | 2025-04-10 01:16:22 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED 2025-04-10 01:16:22.172016 | orchestrator | 2025-04-10 01:16:22 | INFO  
| Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:16:25.211142 | orchestrator | 2025-04-10 01:16:22 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:16:25.211291 | orchestrator | 2025-04-10 01:16:25 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:16:25.212679 | orchestrator | 2025-04-10 01:16:25 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:16:28.259400 | orchestrator | 2025-04-10 01:16:25 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:16:28.259498 | orchestrator | 2025-04-10 01:16:28 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:16:31.324441 | orchestrator | 2025-04-10 01:16:28 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:16:31.324561 | orchestrator | 2025-04-10 01:16:28 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:16:31.324598 | orchestrator | 2025-04-10 01:16:31 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:16:31.324792 | orchestrator | 2025-04-10 01:16:31 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:16:31.325974 | orchestrator | 2025-04-10 01:16:31 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:16:34.371941 | orchestrator | 2025-04-10 01:16:34 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:16:34.372445 | orchestrator | 2025-04-10 01:16:34 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:16:37.426346 | orchestrator | 2025-04-10 01:16:34 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:16:37.426486 | orchestrator | 2025-04-10 01:16:37 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:16:37.429241 | orchestrator | 2025-04-10 01:16:37 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10
01:16:40.485122 | orchestrator | 2025-04-10 01:16:37 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:16:40.485264 | orchestrator | 2025-04-10 01:16:40 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:16:40.487573 | orchestrator | 2025-04-10 01:16:40 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:16:43.536382 | orchestrator | 2025-04-10 01:16:40 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:16:43.536518 | orchestrator | 2025-04-10 01:16:43 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:16:43.538952 | orchestrator | 2025-04-10 01:16:43 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:16:46.589233 | orchestrator | 2025-04-10 01:16:43 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:16:46.589388 | orchestrator | 2025-04-10 01:16:46 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:16:46.590609 | orchestrator | 2025-04-10 01:16:46 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:16:49.637677 | orchestrator | 2025-04-10 01:16:46 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:16:49.637817 | orchestrator | 2025-04-10 01:16:49 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:16:52.676717 | orchestrator | 2025-04-10 01:16:49 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:16:52.676813 | orchestrator | 2025-04-10 01:16:49 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:16:52.676839 | orchestrator | 2025-04-10 01:16:52 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:16:52.678439 | orchestrator | 2025-04-10 01:16:52 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:16:52.678629 | orchestrator | 2025-04-10 01:16:52 | INFO  | Wait 1 second(s)
until the next check
2025-04-10 01:16:55.722071 | orchestrator | 2025-04-10 01:16:55 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:16:55.723233 | orchestrator | 2025-04-10 01:16:55 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:16:58.767913 | orchestrator | 2025-04-10 01:16:55 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:16:58.768036 | orchestrator | 2025-04-10 01:16:58 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:16:58.768572 | orchestrator | 2025-04-10 01:16:58 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:01.825191 | orchestrator | 2025-04-10 01:16:58 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:01.825325 | orchestrator | 2025-04-10 01:17:01 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:01.827947 | orchestrator | 2025-04-10 01:17:01 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:04.874494 | orchestrator | 2025-04-10 01:17:01 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:04.874636 | orchestrator | 2025-04-10 01:17:04 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:04.875824 | orchestrator | 2025-04-10 01:17:04 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:07.918622 | orchestrator | 2025-04-10 01:17:04 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:07.918784 | orchestrator | 2025-04-10 01:17:07 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:10.965157 | orchestrator | 2025-04-10 01:17:07 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:10.965345 | orchestrator | 2025-04-10 01:17:07 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:10.965372 | orchestrator | 2025-04-10
01:17:10 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:14.010997 | orchestrator | 2025-04-10 01:17:10 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:14.011102 | orchestrator | 2025-04-10 01:17:10 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:14.011130 | orchestrator | 2025-04-10 01:17:14 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:14.017830 | orchestrator | 2025-04-10 01:17:14 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:17.076590 | orchestrator | 2025-04-10 01:17:14 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:17.076777 | orchestrator | 2025-04-10 01:17:17 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:17.076936 | orchestrator | 2025-04-10 01:17:17 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:17.076953 | orchestrator | 2025-04-10 01:17:17 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:20.102569 | orchestrator | 2025-04-10 01:17:20 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:23.134341 | orchestrator | 2025-04-10 01:17:20 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:23.134467 | orchestrator | 2025-04-10 01:17:20 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:23.134505 | orchestrator | 2025-04-10 01:17:23 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:26.182069 | orchestrator | 2025-04-10 01:17:23 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:26.182247 | orchestrator | 2025-04-10 01:17:23 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:26.182286 | orchestrator | 2025-04-10 01:17:26 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state
STARTED
2025-04-10 01:17:26.184517 | orchestrator | 2025-04-10 01:17:26 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:29.239445 | orchestrator | 2025-04-10 01:17:26 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:29.239581 | orchestrator | 2025-04-10 01:17:29 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:29.240663 | orchestrator | 2025-04-10 01:17:29 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:32.292263 | orchestrator | 2025-04-10 01:17:29 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:32.292356 | orchestrator | 2025-04-10 01:17:32 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:32.292621 | orchestrator | 2025-04-10 01:17:32 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:35.342587 | orchestrator | 2025-04-10 01:17:32 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:35.342728 | orchestrator | 2025-04-10 01:17:35 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:35.343139 | orchestrator | 2025-04-10 01:17:35 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:35.343277 | orchestrator | 2025-04-10 01:17:35 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:38.394508 | orchestrator | 2025-04-10 01:17:38 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:38.396235 | orchestrator | 2025-04-10 01:17:38 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:41.448906 | orchestrator | 2025-04-10 01:17:38 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:41.449083 | orchestrator | 2025-04-10 01:17:41 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:41.450378 | orchestrator | 2025-04-10 01:17:41 | INFO 
| Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:44.493574 | orchestrator | 2025-04-10 01:17:41 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:44.493715 | orchestrator | 2025-04-10 01:17:44 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:44.495226 | orchestrator | 2025-04-10 01:17:44 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:47.549806 | orchestrator | 2025-04-10 01:17:44 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:47.549953 | orchestrator | 2025-04-10 01:17:47 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:47.550373 | orchestrator | 2025-04-10 01:17:47 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:47.550489 | orchestrator | 2025-04-10 01:17:47 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:50.607579 | orchestrator | 2025-04-10 01:17:50 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:50.607853 | orchestrator | 2025-04-10 01:17:50 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:53.656906 | orchestrator | 2025-04-10 01:17:50 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:53.657051 | orchestrator | 2025-04-10 01:17:53 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state STARTED
2025-04-10 01:17:53.659464 | orchestrator | 2025-04-10 01:17:53 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:17:56.709986 | orchestrator | 2025-04-10 01:17:53 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:17:56.710347 | orchestrator | 2025-04-10 01:17:56 | INFO  | Task 82c90bf7-656e-4249-a77e-6c97fea07bd9 is in state SUCCESS
2025-04-10 01:17:56.711500 | orchestrator |
2025-04-10 01:17:56.711540 | orchestrator | None
2025-04-10 01:17:56.711556 | orchestrator |
2025-04-10
01:17:56.711570 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-10 01:17:56.711585 | orchestrator |
2025-04-10 01:17:56.711599 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-04-10 01:17:56.711660 | orchestrator | Thursday 10 April 2025 01:09:11 +0000 (0:00:00.304) 0:00:00.304 ********
2025-04-10 01:17:56.711676 | orchestrator | changed: [testbed-manager]
2025-04-10 01:17:56.711752 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.711805 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:17:56.711823 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:17:56.711838 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:17:56.712247 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:17:56.712265 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:17:56.712279 | orchestrator |
2025-04-10 01:17:56.712294 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-10 01:17:56.712335 | orchestrator | Thursday 10 April 2025 01:09:14 +0000 (0:00:02.440) 0:00:02.744 ********
2025-04-10 01:17:56.712350 | orchestrator | changed: [testbed-manager]
2025-04-10 01:17:56.712364 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.712378 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:17:56.712392 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:17:56.712406 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:17:56.712420 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:17:56.712433 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:17:56.712462 | orchestrator |
2025-04-10 01:17:56.712477 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-10 01:17:56.712491 | orchestrator | Thursday 10 April 2025 01:09:18 +0000 (0:00:04.183) 0:00:06.928 ********
2025-04-10 01:17:56.712505 | orchestrator |
changed: [testbed-manager] => (item=enable_nova_True)
2025-04-10 01:17:56.712520 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-04-10 01:17:56.712534 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-04-10 01:17:56.712548 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-04-10 01:17:56.712561 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-04-10 01:17:56.712575 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-04-10 01:17:56.712589 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-04-10 01:17:56.712651 | orchestrator |
2025-04-10 01:17:56.712668 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-04-10 01:17:56.713013 | orchestrator |
2025-04-10 01:17:56.713035 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-04-10 01:17:56.713049 | orchestrator | Thursday 10 April 2025 01:09:21 +0000 (0:00:02.945) 0:00:09.874 ********
2025-04-10 01:17:56.713063 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 01:17:56.713077 | orchestrator |
2025-04-10 01:17:56.713091 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-04-10 01:17:56.713141 | orchestrator | Thursday 10 April 2025 01:09:23 +0000 (0:00:01.866) 0:00:11.740 ********
2025-04-10 01:17:56.713156 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-04-10 01:17:56.713171 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-04-10 01:17:56.713184 | orchestrator |
2025-04-10 01:17:56.713198 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-04-10 01:17:56.713212 | orchestrator | Thursday 10 April 2025 01:09:28 +0000 (0:00:05.193) 0:00:16.933 ********
2025-04-10 01:17:56.713235 | orchestrator |
changed: [testbed-node-0] => (item=None)
2025-04-10 01:17:56.713250 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-04-10 01:17:56.713264 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.713278 | orchestrator |
2025-04-10 01:17:56.713292 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-04-10 01:17:56.713306 | orchestrator | Thursday 10 April 2025 01:09:33 +0000 (0:00:05.254) 0:00:22.188 ********
2025-04-10 01:17:56.713596 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.713614 | orchestrator |
2025-04-10 01:17:56.713628 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-04-10 01:17:56.713643 | orchestrator | Thursday 10 April 2025 01:09:34 +0000 (0:00:00.588) 0:00:22.776 ********
2025-04-10 01:17:56.713656 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.713671 | orchestrator |
2025-04-10 01:17:56.713685 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-04-10 01:17:56.713699 | orchestrator | Thursday 10 April 2025 01:09:35 +0000 (0:00:01.578) 0:00:24.355 ********
2025-04-10 01:17:56.713713 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.713727 | orchestrator |
2025-04-10 01:17:56.713740 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-04-10 01:17:56.713754 | orchestrator | Thursday 10 April 2025 01:09:43 +0000 (0:00:07.446) 0:00:31.801 ********
2025-04-10 01:17:56.714332 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.714361 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.714376 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.714390 | orchestrator |
2025-04-10 01:17:56.714404 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-04-10 01:17:56.714419 | orchestrator | Thursday 10 April 2025
01:09:44 +0000 (0:00:01.181) 0:00:32.982 ********
2025-04-10 01:17:56.714433 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:17:56.714447 | orchestrator |
2025-04-10 01:17:56.714462 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-04-10 01:17:56.714476 | orchestrator | Thursday 10 April 2025 01:10:14 +0000 (0:00:30.342) 0:01:03.325 ********
2025-04-10 01:17:56.714490 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.714504 | orchestrator |
2025-04-10 01:17:56.714516 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-04-10 01:17:56.714723 | orchestrator | Thursday 10 April 2025 01:10:31 +0000 (0:00:16.522) 0:01:19.847 ********
2025-04-10 01:17:56.714743 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:17:56.714756 | orchestrator |
2025-04-10 01:17:56.714768 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-04-10 01:17:56.714781 | orchestrator | Thursday 10 April 2025 01:10:41 +0000 (0:00:10.638) 0:01:30.486 ********
2025-04-10 01:17:56.714870 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:17:56.714890 | orchestrator |
2025-04-10 01:17:56.714903 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-04-10 01:17:56.714952 | orchestrator | Thursday 10 April 2025 01:10:43 +0000 (0:00:01.415) 0:01:31.901 ********
2025-04-10 01:17:56.714966 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.715027 | orchestrator |
2025-04-10 01:17:56.715041 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-04-10 01:17:56.715053 | orchestrator | Thursday 10 April 2025 01:10:43 +0000 (0:00:00.812) 0:01:32.714 ********
2025-04-10 01:17:56.715066 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 01:17:56.715181 |
orchestrator |
2025-04-10 01:17:56.715197 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-04-10 01:17:56.715210 | orchestrator | Thursday 10 April 2025 01:10:45 +0000 (0:00:01.271) 0:01:33.986 ********
2025-04-10 01:17:56.715222 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:17:56.715235 | orchestrator |
2025-04-10 01:17:56.715248 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-04-10 01:17:56.715260 | orchestrator | Thursday 10 April 2025 01:11:01 +0000 (0:00:15.896) 0:01:49.883 ********
2025-04-10 01:17:56.715272 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.715285 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.715297 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.715310 | orchestrator |
2025-04-10 01:17:56.715322 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-04-10 01:17:56.715334 | orchestrator |
2025-04-10 01:17:56.715347 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-04-10 01:17:56.715359 | orchestrator | Thursday 10 April 2025 01:11:01 +0000 (0:00:00.313) 0:01:50.197 ********
2025-04-10 01:17:56.715372 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-10 01:17:56.715384 | orchestrator |
2025-04-10 01:17:56.715397 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-04-10 01:17:56.715409 | orchestrator | Thursday 10 April 2025 01:11:02 +0000 (0:00:00.853) 0:01:51.050 ********
2025-04-10 01:17:56.715422 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.715434 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.715447 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.715459 | orchestrator |
2025-04-10 01:17:56.715471 | orchestrator | TASK [nova-cell :
Creating Nova cell database user and setting permissions] ****
2025-04-10 01:17:56.715484 | orchestrator | Thursday 10 April 2025 01:11:04 +0000 (0:00:02.465) 0:01:53.516 ********
2025-04-10 01:17:56.715510 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.715525 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.715539 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.715553 | orchestrator |
2025-04-10 01:17:56.715567 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-04-10 01:17:56.715581 | orchestrator | Thursday 10 April 2025 01:11:07 +0000 (0:00:02.397) 0:01:55.913 ********
2025-04-10 01:17:56.715595 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.715608 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.715622 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.715636 | orchestrator |
2025-04-10 01:17:56.715650 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-04-10 01:17:56.715664 | orchestrator | Thursday 10 April 2025 01:11:07 +0000 (0:00:00.532) 0:01:56.445 ********
2025-04-10 01:17:56.715678 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-04-10 01:17:56.715692 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.715706 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-04-10 01:17:56.715720 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.716177 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-04-10 01:17:56.716193 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-04-10 01:17:56.716207 | orchestrator |
2025-04-10 01:17:56.716220 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-04-10 01:17:56.716233 | orchestrator | Thursday 10 April 2025 01:11:16 +0000 (0:00:08.685) 0:02:05.130 ********
2025-04-10 01:17:56.716246 |
orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.716259 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.716272 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.716285 | orchestrator |
2025-04-10 01:17:56.716299 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-04-10 01:17:56.716312 | orchestrator | Thursday 10 April 2025 01:11:17 +0000 (0:00:00.596) 0:02:05.727 ********
2025-04-10 01:17:56.716325 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-04-10 01:17:56.716338 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.716351 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-04-10 01:17:56.716365 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.716378 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-04-10 01:17:56.716391 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.716404 | orchestrator |
2025-04-10 01:17:56.716417 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-04-10 01:17:56.716430 | orchestrator | Thursday 10 April 2025 01:11:18 +0000 (0:00:01.174) 0:02:06.902 ********
2025-04-10 01:17:56.716443 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.716456 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.716469 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.716482 | orchestrator |
2025-04-10 01:17:56.716495 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-04-10 01:17:56.716564 | orchestrator | Thursday 10 April 2025 01:11:18 +0000 (0:00:00.489) 0:02:07.392 ********
2025-04-10 01:17:56.716577 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.716643 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.716660 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.716672 | orchestrator |
2025-04-10
01:17:56.716685 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-04-10 01:17:56.716705 | orchestrator | Thursday 10 April 2025 01:11:19 +0000 (0:00:01.139) 0:02:08.531 ********
2025-04-10 01:17:56.716928 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.717011 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.717029 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.717041 | orchestrator |
2025-04-10 01:17:56.717054 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-04-10 01:17:56.717066 | orchestrator | Thursday 10 April 2025 01:11:23 +0000 (0:00:03.679) 0:02:12.211 ********
2025-04-10 01:17:56.717089 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.717164 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.717177 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:17:56.717190 | orchestrator |
2025-04-10 01:17:56.717203 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-04-10 01:17:56.717215 | orchestrator | Thursday 10 April 2025 01:11:43 +0000 (0:00:20.194) 0:02:32.405 ********
2025-04-10 01:17:56.717228 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.717240 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.717252 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:17:56.717265 | orchestrator |
2025-04-10 01:17:56.717277 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-04-10 01:17:56.717290 | orchestrator | Thursday 10 April 2025 01:11:55 +0000 (0:00:11.392) 0:02:43.797 ********
2025-04-10 01:17:56.717302 | orchestrator | ok: [testbed-node-0]
2025-04-10 01:17:56.717314 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.717334 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.717347 | orchestrator |
2025-04-10 01:17:56.717360 |
orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-04-10 01:17:56.717370 | orchestrator | Thursday 10 April 2025 01:11:56 +0000 (0:00:01.240) 0:02:45.038 ********
2025-04-10 01:17:56.717380 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.717390 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.717400 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.717410 | orchestrator |
2025-04-10 01:17:56.717420 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-04-10 01:17:56.717430 | orchestrator | Thursday 10 April 2025 01:12:07 +0000 (0:00:10.851) 0:02:55.889 ********
2025-04-10 01:17:56.717441 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.717451 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.717461 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.717471 | orchestrator |
2025-04-10 01:17:56.717481 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-04-10 01:17:56.717491 | orchestrator | Thursday 10 April 2025 01:12:08 +0000 (0:00:01.590) 0:02:57.479 ********
2025-04-10 01:17:56.717501 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.717512 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.717522 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.717532 | orchestrator |
2025-04-10 01:17:56.717542 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-04-10 01:17:56.717552 | orchestrator |
2025-04-10 01:17:56.717562 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-04-10 01:17:56.717572 | orchestrator | Thursday 10 April 2025 01:12:09 +0000 (0:00:00.528) 0:02:58.008 ********
2025-04-10 01:17:56.717583 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0,
testbed-node-1, testbed-node-2
2025-04-10 01:17:56.717594 | orchestrator |
2025-04-10 01:17:56.717604 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-04-10 01:17:56.717615 | orchestrator | Thursday 10 April 2025 01:12:10 +0000 (0:00:00.856) 0:02:58.864 ********
2025-04-10 01:17:56.717625 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-04-10 01:17:56.717638 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-04-10 01:17:56.717650 | orchestrator |
2025-04-10 01:17:56.717661 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-04-10 01:17:56.717673 | orchestrator | Thursday 10 April 2025 01:12:13 +0000 (0:00:03.372) 0:03:02.237 ********
2025-04-10 01:17:56.717684 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-04-10 01:17:56.717698 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-04-10 01:17:56.717709 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-04-10 01:17:56.717731 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-04-10 01:17:56.717743 | orchestrator |
2025-04-10 01:17:56.717754 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-04-10 01:17:56.717766 | orchestrator | Thursday 10 April 2025 01:12:20 +0000 (0:00:06.811) 0:03:09.049 ********
2025-04-10 01:17:56.717778 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-04-10 01:17:56.717789 | orchestrator |
2025-04-10 01:17:56.717800 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-04-10 01:17:56.717812 | orchestrator |
Thursday 10 April 2025 01:12:23 +0000 (0:00:03.453) 0:03:12.502 ********
2025-04-10 01:17:56.717823 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-04-10 01:17:56.717835 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-04-10 01:17:56.717846 | orchestrator |
2025-04-10 01:17:56.717857 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-04-10 01:17:56.717869 | orchestrator | Thursday 10 April 2025 01:12:28 +0000 (0:00:04.384) 0:03:16.887 ********
2025-04-10 01:17:56.717880 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-04-10 01:17:56.717891 | orchestrator |
2025-04-10 01:17:56.717903 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-04-10 01:17:56.717914 | orchestrator | Thursday 10 April 2025 01:12:31 +0000 (0:00:03.392) 0:03:20.279 ********
2025-04-10 01:17:56.717925 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-04-10 01:17:56.717936 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-04-10 01:17:56.717947 | orchestrator |
2025-04-10 01:17:56.717959 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-04-10 01:17:56.718072 | orchestrator | Thursday 10 April 2025 01:12:39 +0000 (0:00:08.362) 0:03:28.641 ********
2025-04-10 01:17:56.718110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.718125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 
01:17:56.718145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.718215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.718231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.718243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.718254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.718271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.718282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.718292 | orchestrator | 2025-04-10 01:17:56.718303 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-04-10 01:17:56.718313 | orchestrator | Thursday 10 April 2025 01:12:41 +0000 (0:00:01.699) 0:03:30.341 ******** 2025-04-10 01:17:56.718323 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.718334 | orchestrator | 2025-04-10 01:17:56.718349 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-04-10 01:17:56.718360 | orchestrator | Thursday 10 April 2025 01:12:41 +0000 (0:00:00.174) 0:03:30.515 ******** 2025-04-10 01:17:56.718370 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.718380 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.718391 | orchestrator | skipping: [testbed-node-2] 2025-04-10 
01:17:56.718401 | orchestrator | 2025-04-10 01:17:56.718411 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-04-10 01:17:56.718421 | orchestrator | Thursday 10 April 2025 01:12:42 +0000 (0:00:00.528) 0:03:31.044 ******** 2025-04-10 01:17:56.718432 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-10 01:17:56.718442 | orchestrator | 2025-04-10 01:17:56.718506 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-04-10 01:17:56.718520 | orchestrator | Thursday 10 April 2025 01:12:42 +0000 (0:00:00.415) 0:03:31.459 ******** 2025-04-10 01:17:56.718530 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.718540 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.718550 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.718560 | orchestrator | 2025-04-10 01:17:56.718570 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-10 01:17:56.718581 | orchestrator | Thursday 10 April 2025 01:12:43 +0000 (0:00:00.316) 0:03:31.775 ******** 2025-04-10 01:17:56.718591 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:17:56.718602 | orchestrator | 2025-04-10 01:17:56.718612 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-04-10 01:17:56.718622 | orchestrator | Thursday 10 April 2025 01:12:43 +0000 (0:00:00.882) 0:03:32.658 ******** 2025-04-10 01:17:56.718633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.718656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.718739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.718757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.718768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.718785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.718796 | orchestrator | 2025-04-10 01:17:56.718806 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-04-10 01:17:56.718817 | orchestrator | Thursday 10 April 2025 01:12:46 +0000 (0:00:02.790) 0:03:35.449 ******** 2025-04-10 01:17:56.718828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-10 01:17:56.718840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.718912 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.718928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-10 01:17:56.718946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.718957 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.718967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-10 01:17:56.718979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.718990 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.719000 | orchestrator | 2025-04-10 01:17:56.719010 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-04-10 01:17:56.719020 | orchestrator | Thursday 10 April 2025 01:12:47 +0000 (0:00:00.887) 0:03:36.336 ******** 2025-04-10 01:17:56.719083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-10 01:17:56.719120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.719132 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.719143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-10 01:17:56.719154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.719165 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.719217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-10 01:17:56.719237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.719248 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.719258 | orchestrator | 2025-04-10 01:17:56.719287 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 
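In each of the loops above, `nova-api` and `nova-scheduler` are processed while `nova-super-conductor` is skipped, because its `enabled` flag is the string `'no'` rather than the boolean `True`. The filtering can be sketched as follows; the `is_enabled` helper mirrors Ansible-style truthiness and is an illustration under that assumption, not kolla-ansible's actual code, while the service names and flags are copied from the log:

```python
# Sketch: skip services whose 'enabled' flag is falsy, as the task loops
# above do. Service names/flags come from the log; the helper is illustrative.

FALSY = {"no", "false", "0", "off", ""}

def is_enabled(value) -> bool:
    """Ansible-style truthiness: booleans pass through; strings such as
    'no' or 'false' count as disabled."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() not in FALSY

services = {
    "nova-api": {"enabled": True},
    "nova-scheduler": {"enabled": True},
    "nova-super-conductor": {"enabled": "no"},  # shown as "skipping" in the log
}

active = [name for name, svc in services.items() if is_enabled(svc["enabled"])]
print(active)  # nova-api and nova-scheduler only
```

This is also why every `skipping:` line for `nova-super-conductor` repeats the full item dict: Ansible echoes the loop item even when the `when:` condition filters it out.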
2025-04-10 01:17:56.719298 | orchestrator | Thursday 10 April 2025 01:12:48 +0000 (0:00:01.206) 0:03:37.543 ******** 2025-04-10 01:17:56.719309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.719320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.719384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.719439 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.719451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.719462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.719473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.719537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.719558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.719569 | orchestrator | 2025-04-10 01:17:56.719580 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-04-10 01:17:56.719590 | 
orchestrator | Thursday 10 April 2025 01:12:51 +0000 (0:00:02.792) 0:03:40.336 ******** 2025-04-10 01:17:56.719601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.719624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.719687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.719725 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.719738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.719750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.719762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 
'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.719783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.719851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.719873 | orchestrator | 2025-04-10 01:17:56.719884 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-04-10 01:17:56.719895 | orchestrator | Thursday 10 April 
2025 01:12:58 +0000 (0:00:06.800) 0:03:47.136 ******** 2025-04-10 01:17:56.719906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-10 01:17:56.719918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.719930 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.719941 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.719961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-10 
01:17:56.720034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.720050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.720062 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.720073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-10 01:17:56.720160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.720172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.720195 | orchestrator | skipping: [testbed-node-2] 2025-04-10 
01:17:56.720205 | orchestrator | 2025-04-10 01:17:56.720216 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-04-10 01:17:56.720227 | orchestrator | Thursday 10 April 2025 01:12:59 +0000 (0:00:00.826) 0:03:47.962 ******** 2025-04-10 01:17:56.720237 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:17:56.720247 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:17:56.720258 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:17:56.720268 | orchestrator | 2025-04-10 01:17:56.720278 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-04-10 01:17:56.720289 | orchestrator | Thursday 10 April 2025 01:13:01 +0000 (0:00:01.791) 0:03:49.754 ******** 2025-04-10 01:17:56.720359 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.720374 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.720385 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.720395 | orchestrator | 2025-04-10 01:17:56.720405 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-04-10 01:17:56.720416 | orchestrator | Thursday 10 April 2025 01:13:01 +0000 (0:00:00.492) 0:03:50.247 ******** 2025-04-10 01:17:56.720441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.720452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.720482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-10 01:17:56.720543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.720555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.720565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.720574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.720583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.720619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.720628 | orchestrator | 2025-04-10 01:17:56.720637 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-04-10 01:17:56.720646 | orchestrator | Thursday 10 April 2025 01:13:03 +0000 (0:00:02.133) 0:03:52.381 ******** 2025-04-10 01:17:56.720655 | orchestrator | 2025-04-10 01:17:56.720664 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-04-10 01:17:56.720673 | orchestrator | Thursday 10 April 2025 01:13:03 +0000 (0:00:00.302) 0:03:52.683 ******** 2025-04-10 01:17:56.720681 | orchestrator | 2025-04-10 01:17:56.720690 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-04-10 01:17:56.720702 | orchestrator | Thursday 10 April 2025 01:13:04 +0000 (0:00:00.128) 0:03:52.812 ******** 2025-04-10 01:17:56.720711 | orchestrator | 2025-04-10 01:17:56.720764 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] 
********************** 2025-04-10 01:17:56.720776 | orchestrator | Thursday 10 April 2025 01:13:04 +0000 (0:00:00.289) 0:03:53.102 ******** 2025-04-10 01:17:56.720785 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:17:56.720864 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:17:56.720891 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:17:56.720900 | orchestrator | 2025-04-10 01:17:56.720909 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-04-10 01:17:56.720918 | orchestrator | Thursday 10 April 2025 01:13:23 +0000 (0:00:18.852) 0:04:11.954 ******** 2025-04-10 01:17:56.720927 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:17:56.720935 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:17:56.720944 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:17:56.720953 | orchestrator | 2025-04-10 01:17:56.720962 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-04-10 01:17:56.720970 | orchestrator | 2025-04-10 01:17:56.720979 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-10 01:17:56.720988 | orchestrator | Thursday 10 April 2025 01:13:32 +0000 (0:00:09.076) 0:04:21.031 ******** 2025-04-10 01:17:56.720997 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:17:56.721007 | orchestrator | 2025-04-10 01:17:56.721015 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-10 01:17:56.721024 | orchestrator | Thursday 10 April 2025 01:13:33 +0000 (0:00:01.481) 0:04:22.512 ******** 2025-04-10 01:17:56.721032 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:17:56.721041 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:17:56.721049 | orchestrator | skipping: [testbed-node-5] 
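The container definitions logged above each carry a `healthcheck` dict whose `test` is either `healthcheck_curl http://<api_ip>:8774` (nova-api) or `healthcheck_port nova-scheduler 5672` (scheduler's RabbitMQ connection). These are kolla helper scripts run inside the container; a loose, simplified Python approximation of their semantics (hypothetical re-implementation, not kolla's actual code — kolla's `healthcheck_port` additionally checks which process owns the connection) might look like:

```python
import socket
import urllib.request


def healthcheck_curl(url: str, timeout: float = 30.0) -> bool:
    """Roughly like kolla's healthcheck_curl: the service counts as
    healthy if the endpoint answers HTTP at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # an HTTP error status still means the API answered
    except OSError:
        return False  # connection refused / timed out


def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Loose approximation of healthcheck_port: healthy if a TCP
    connection succeeds (e.g. the scheduler reaching RabbitMQ on 5672)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Per the dicts above, the container runtime would retry such a test up to `retries: 3` times every `interval: 30` seconds before marking the container unhealthy.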
2025-04-10 01:17:56.721058 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.721066 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.721075 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.721083 | orchestrator | 2025-04-10 01:17:56.721092 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-04-10 01:17:56.721118 | orchestrator | Thursday 10 April 2025 01:13:34 +0000 (0:00:00.756) 0:04:23.269 ******** 2025-04-10 01:17:56.721134 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.721143 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.721152 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.721160 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:17:56.721169 | orchestrator | 2025-04-10 01:17:56.721178 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-10 01:17:56.721187 | orchestrator | Thursday 10 April 2025 01:13:35 +0000 (0:00:01.338) 0:04:24.608 ******** 2025-04-10 01:17:56.721196 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-04-10 01:17:56.721204 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-04-10 01:17:56.721213 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-04-10 01:17:56.721222 | orchestrator | 2025-04-10 01:17:56.721230 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-10 01:17:56.721239 | orchestrator | Thursday 10 April 2025 01:13:36 +0000 (0:00:00.659) 0:04:25.267 ******** 2025-04-10 01:17:56.721247 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-04-10 01:17:56.721256 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-04-10 01:17:56.721264 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-04-10 01:17:56.721273 | orchestrator | 2025-04-10 
01:17:56.721282 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-10 01:17:56.721290 | orchestrator | Thursday 10 April 2025 01:13:37 +0000 (0:00:01.352) 0:04:26.620 ******** 2025-04-10 01:17:56.721299 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-04-10 01:17:56.721308 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:17:56.721316 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-04-10 01:17:56.721325 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:17:56.721333 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-04-10 01:17:56.721342 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:17:56.721355 | orchestrator | 2025-04-10 01:17:56.721364 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-04-10 01:17:56.721372 | orchestrator | Thursday 10 April 2025 01:13:38 +0000 (0:00:00.938) 0:04:27.558 ******** 2025-04-10 01:17:56.721381 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-10 01:17:56.721389 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-10 01:17:56.721398 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.721408 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-10 01:17:56.721417 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-10 01:17:56.721427 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-04-10 01:17:56.721437 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-04-10 01:17:56.721446 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-04-10 01:17:56.721456 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.721466 
| orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-10 01:17:56.721475 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-10 01:17:56.721484 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.721495 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-04-10 01:17:56.721504 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-04-10 01:17:56.721517 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-04-10 01:17:56.721527 | orchestrator | 2025-04-10 01:17:56.721600 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-04-10 01:17:56.721613 | orchestrator | Thursday 10 April 2025 01:13:40 +0000 (0:00:01.939) 0:04:29.498 ******** 2025-04-10 01:17:56.721629 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.721639 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.721648 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.721658 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:17:56.721667 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:17:56.721675 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:17:56.721684 | orchestrator | 2025-04-10 01:17:56.721693 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-04-10 01:17:56.721701 | orchestrator | Thursday 10 April 2025 01:13:41 +0000 (0:00:01.139) 0:04:30.638 ******** 2025-04-10 01:17:56.721710 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.721718 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.721727 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.721735 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:17:56.721744 | orchestrator | changed: [testbed-node-5] 2025-04-10 
01:17:56.721752 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:17:56.721761 | orchestrator | 2025-04-10 01:17:56.721769 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-04-10 01:17:56.721778 | orchestrator | Thursday 10 April 2025 01:13:43 +0000 (0:00:01.904) 0:04:32.543 ******** 2025-04-10 01:17:56.721788 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-10 01:17:56.721799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.721809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.721819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-10 01:17:56.721882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.721895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.721906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.721927 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-10 01:17:56.721937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.721946 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.722071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.722091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.722120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.722130 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722196 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.722232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.722242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 
'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.722251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.722275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.722290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.722360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.722369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.722378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.722393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.722413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.722423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.722490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 
'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.722499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.722523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 
'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.722542 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.722623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.722658 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.722671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.722720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.722743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.722753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.722784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.722793 | orchestrator | 2025-04-10 01:17:56.722803 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-10 01:17:56.722812 | orchestrator | Thursday 10 April 2025 01:13:47 +0000 (0:00:03.258) 0:04:35.802 ******** 2025-04-10 01:17:56.722821 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-10 01:17:56.722832 | orchestrator | 2025-04-10 01:17:56.722841 | orchestrator | TASK [service-cert-copy : nova | 
Copying over extra CA certificates] *********** 2025-04-10 01:17:56.722850 | orchestrator | Thursday 10 April 2025 01:13:48 +0000 (0:00:01.561) 0:04:37.363 ******** 2025-04-10 01:17:56.722909 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722923 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}}) 2025-04-10 01:17:56.722932 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.722965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.723006 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-10 01:17:56.723018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.723028 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-10 01:17:56.723038 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-10 01:17:56.723086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.723115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}}) 2025-04-10 01:17:56.723125 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.723195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.723221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.723232 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.723247 | orchestrator | 2025-04-10 01:17:56.723256 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-04-10 01:17:56.723265 | orchestrator | Thursday 10 April 2025 01:13:52 +0000 (0:00:04.072) 0:04:41.435 ******** 2025-04-10 01:17:56.723274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.723283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.723340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.723352 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:17:56.723370 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.723385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.723394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.723403 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:17:56.723412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.723465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': 
'30'}}})  2025-04-10 01:17:56.723498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.723513 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:17:56.723528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.723537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.723546 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.723556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.723565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.723573 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.723627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.723641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.723650 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.723660 | orchestrator | 2025-04-10 01:17:56.723669 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-04-10 01:17:56.723683 | orchestrator | Thursday 10 April 2025 01:13:54 +0000 (0:00:02.013) 0:04:43.449 ******** 2025-04-10 01:17:56.723711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.723723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.723733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.723763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.723775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.723798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.723808 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:17:56.723817 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:17:56.723826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.723835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.723844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.723853 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:17:56.723881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.723891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2025-04-10 01:17:56.723914 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.723923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.723932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.723941 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.723950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.723959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.723968 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.723977 | orchestrator | 2025-04-10 01:17:56.723985 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-10 01:17:56.723994 | orchestrator | Thursday 10 April 2025 01:13:57 +0000 (0:00:02.618) 0:04:46.067 ******** 2025-04-10 01:17:56.724002 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.724020 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.724029 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.724038 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-10 01:17:56.724046 | orchestrator | 2025-04-10 01:17:56.724055 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-04-10 01:17:56.724063 | orchestrator | Thursday 10 April 2025 01:13:58 +0000 (0:00:01.252) 0:04:47.320 ******** 2025-04-10 01:17:56.724114 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-10 01:17:56.724126 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-10 01:17:56.724136 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-10 01:17:56.724145 | 
orchestrator | 2025-04-10 01:17:56.724154 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-04-10 01:17:56.724163 | orchestrator | Thursday 10 April 2025 01:13:59 +0000 (0:00:00.859) 0:04:48.180 ******** 2025-04-10 01:17:56.724172 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-10 01:17:56.724181 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-10 01:17:56.724190 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-10 01:17:56.724199 | orchestrator | 2025-04-10 01:17:56.724208 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-04-10 01:17:56.724217 | orchestrator | Thursday 10 April 2025 01:14:00 +0000 (0:00:00.862) 0:04:49.042 ******** 2025-04-10 01:17:56.724226 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:17:56.724236 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:17:56.724245 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:17:56.724254 | orchestrator | 2025-04-10 01:17:56.724264 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-04-10 01:17:56.724274 | orchestrator | Thursday 10 April 2025 01:14:01 +0000 (0:00:00.827) 0:04:49.869 ******** 2025-04-10 01:17:56.724284 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:17:56.724294 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:17:56.724304 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:17:56.724314 | orchestrator | 2025-04-10 01:17:56.724324 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-04-10 01:17:56.724333 | orchestrator | Thursday 10 April 2025 01:14:01 +0000 (0:00:00.336) 0:04:50.206 ******** 2025-04-10 01:17:56.724342 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-10 01:17:56.724352 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-10 01:17:56.724361 | orchestrator | changed: [testbed-node-5] => 
(item=nova-compute) 2025-04-10 01:17:56.724370 | orchestrator | 2025-04-10 01:17:56.724379 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-04-10 01:17:56.724388 | orchestrator | Thursday 10 April 2025 01:14:02 +0000 (0:00:01.351) 0:04:51.557 ******** 2025-04-10 01:17:56.724396 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-10 01:17:56.724406 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-10 01:17:56.724415 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-04-10 01:17:56.724424 | orchestrator | 2025-04-10 01:17:56.724433 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-04-10 01:17:56.724442 | orchestrator | Thursday 10 April 2025 01:14:04 +0000 (0:00:01.393) 0:04:52.950 ******** 2025-04-10 01:17:56.724451 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-10 01:17:56.724460 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-10 01:17:56.724469 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-04-10 01:17:56.724478 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-04-10 01:17:56.724490 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-04-10 01:17:56.724500 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-04-10 01:17:56.724509 | orchestrator | 2025-04-10 01:17:56.724518 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-04-10 01:17:56.724533 | orchestrator | Thursday 10 April 2025 01:14:09 +0000 (0:00:05.696) 0:04:58.647 ******** 2025-04-10 01:17:56.724542 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:17:56.724551 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:17:56.724560 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:17:56.724569 | orchestrator | 2025-04-10 01:17:56.724578 
| orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-04-10 01:17:56.724587 | orchestrator | Thursday 10 April 2025 01:14:10 +0000 (0:00:00.308) 0:04:58.956 ******** 2025-04-10 01:17:56.724601 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:17:56.724610 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:17:56.724619 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:17:56.724628 | orchestrator | 2025-04-10 01:17:56.724638 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-04-10 01:17:56.724647 | orchestrator | Thursday 10 April 2025 01:14:10 +0000 (0:00:00.479) 0:04:59.435 ******** 2025-04-10 01:17:56.724655 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:17:56.724665 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:17:56.724674 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:17:56.724683 | orchestrator | 2025-04-10 01:17:56.724692 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-04-10 01:17:56.724701 | orchestrator | Thursday 10 April 2025 01:14:12 +0000 (0:00:01.597) 0:05:01.033 ******** 2025-04-10 01:17:56.724711 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-10 01:17:56.724724 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-10 01:17:56.724733 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-10 01:17:56.724742 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-10 01:17:56.724752 | orchestrator | changed: [testbed-node-4] => 
(item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-10 01:17:56.724761 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-10 01:17:56.724770 | orchestrator | 2025-04-10 01:17:56.724779 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-04-10 01:17:56.724809 | orchestrator | Thursday 10 April 2025 01:14:15 +0000 (0:00:03.599) 0:05:04.632 ******** 2025-04-10 01:17:56.724820 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-10 01:17:56.724829 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-10 01:17:56.724838 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-10 01:17:56.724847 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-10 01:17:56.724856 | orchestrator | changed: [testbed-node-3] 2025-04-10 01:17:56.724870 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-10 01:17:56.724880 | orchestrator | changed: [testbed-node-4] 2025-04-10 01:17:56.724890 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-10 01:17:56.724900 | orchestrator | changed: [testbed-node-5] 2025-04-10 01:17:56.724909 | orchestrator | 2025-04-10 01:17:56.724918 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-04-10 01:17:56.724927 | orchestrator | Thursday 10 April 2025 01:14:19 +0000 (0:00:03.515) 0:05:08.148 ******** 2025-04-10 01:17:56.724936 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:17:56.724945 | orchestrator | 2025-04-10 01:17:56.724954 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-04-10 01:17:56.724963 | orchestrator | Thursday 10 April 2025 01:14:19 +0000 (0:00:00.133) 0:05:08.282 ******** 2025-04-10 01:17:56.724972 | orchestrator | skipping: 
[testbed-node-3] 2025-04-10 01:17:56.724981 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:17:56.724990 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:17:56.724999 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.725008 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.725017 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.725026 | orchestrator | 2025-04-10 01:17:56.725035 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-04-10 01:17:56.725044 | orchestrator | Thursday 10 April 2025 01:14:20 +0000 (0:00:00.994) 0:05:09.276 ******** 2025-04-10 01:17:56.725062 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-10 01:17:56.725071 | orchestrator | 2025-04-10 01:17:56.725080 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-04-10 01:17:56.725089 | orchestrator | Thursday 10 April 2025 01:14:20 +0000 (0:00:00.392) 0:05:09.668 ******** 2025-04-10 01:17:56.725144 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:17:56.725153 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:17:56.725162 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:17:56.725170 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.725179 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.725187 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.725196 | orchestrator | 2025-04-10 01:17:56.725204 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-04-10 01:17:56.725213 | orchestrator | Thursday 10 April 2025 01:14:21 +0000 (0:00:00.925) 0:05:10.594 ******** 2025-04-10 01:17:56.725222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.725241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.725274 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.725300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.725316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.725333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.725361 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725376 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.725393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.725418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 
'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.725446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.725456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.725469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.725478 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.725503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.725538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.725548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.725561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': 
{'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.725570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.725578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.725595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.725604 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.725652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.725661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.725669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.725678 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.725721 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.725745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.725754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.725770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.725803 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.725827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.725835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.725844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.725859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-10 01:17:56.725889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-10 01:17:56.725899 | orchestrator |
2025-04-10 01:17:56.725908 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-04-10 01:17:56.725916 | orchestrator | Thursday 10 April 2025 01:14:25 +0000 (0:00:04.061) 0:05:14.656 ********
2025-04-10 01:17:56.725924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.725932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.725940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.725948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 
01:17:56.725957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.725999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.726040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.726051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.726059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.726068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 
01:17:56.726079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.726107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.726147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.726158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.726166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.726175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 
01:17:56.726183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.726196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.726230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.726241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.726249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.726258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.726266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.726285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.726313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.726323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.726332 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.726341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.726355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.726364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.726390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.726407 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.726416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.726425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.726439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.726448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.726474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.726483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.726492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.726500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.726516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.726528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.726537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.726571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.726581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.726590 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.726598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.726611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-10 01:17:56.726619 | orchestrator |
2025-04-10 01:17:56.726627 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-04-10 01:17:56.726635 | orchestrator | Thursday 10 April 2025 01:14:33 +0000 (0:00:07.713) 0:05:22.369 ********
2025-04-10 01:17:56.726643 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:17:56.726651 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:17:56.726659 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.726667 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:17:56.726675 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.726683 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.726691 | orchestrator |
2025-04-10 01:17:56.726699 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-04-10 01:17:56.726707 | orchestrator | Thursday 10 April 2025 01:14:35 +0000 (0:00:02.150) 0:05:24.520 ********
2025-04-10 01:17:56.726715 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-04-10 01:17:56.726723 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-04-10 01:17:56.726731 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-04-10 01:17:56.726739 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-04-10 01:17:56.726765 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-04-10 01:17:56.726775 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.726783 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-04-10 01:17:56.726791 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-04-10 01:17:56.726799 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-04-10 01:17:56.726807 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.726815 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-04-10 01:17:56.726822 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.726830 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-04-10 01:17:56.726838 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-04-10 01:17:56.726846 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-04-10 01:17:56.726854 | orchestrator |
2025-04-10 01:17:56.726862 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-04-10 01:17:56.726888 | orchestrator | Thursday 10 April 2025 01:14:42 +0000 (0:00:06.416) 0:05:30.936 ********
2025-04-10 01:17:56.726897 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:17:56.726905 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:17:56.726917 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:17:56.726925 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.726933 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.726941 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.726949 | orchestrator |
2025-04-10 01:17:56.726957 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-04-10 01:17:56.726965 | orchestrator | Thursday 10 April 2025 01:14:43 +0000 (0:00:01.074) 0:05:32.011 ********
2025-04-10 01:17:56.726973 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-04-10 01:17:56.726981 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-04-10 01:17:56.726989 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-04-10 01:17:56.726997 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-04-10 01:17:56.727005 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-04-10 01:17:56.727013 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-04-10 01:17:56.727021 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-04-10 01:17:56.727029 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-04-10 01:17:56.727037 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-04-10 01:17:56.727045 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-04-10 01:17:56.727053 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.727064 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-04-10 01:17:56.727072 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.727080 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-04-10 01:17:56.727088 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.727135 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-04-10 01:17:56.727144 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-04-10 01:17:56.727152 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-04-10 01:17:56.727160 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-04-10 01:17:56.727167 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-04-10 01:17:56.727175 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-04-10 01:17:56.727183 | orchestrator |
2025-04-10 01:17:56.727191 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-04-10 01:17:56.727199 | orchestrator | Thursday 10 April 2025 01:14:51 +0000 (0:00:08.436) 0:05:40.448 ********
2025-04-10 01:17:56.727207 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-04-10 01:17:56.727215 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-04-10 01:17:56.727243 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-04-10 01:17:56.727253 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-04-10 01:17:56.727268 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-04-10 01:17:56.727276 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-04-10 01:17:56.727284 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-04-10 01:17:56.727291 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-10 01:17:56.727299 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-04-10 01:17:56.727307 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-04-10 01:17:56.727315 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-10 01:17:56.727323 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-04-10 01:17:56.727331 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.727338 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-10 01:17:56.727346 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-04-10 01:17:56.727367 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.727375 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-04-10 01:17:56.727383 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.727391 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-04-10 01:17:56.727399 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-04-10 01:17:56.727407 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-04-10 01:17:56.727414 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-10 01:17:56.727422 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-10 01:17:56.727430 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-10 01:17:56.727438 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-04-10 01:17:56.727446 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-04-10 01:17:56.727454 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-04-10 01:17:56.727462 | orchestrator |
2025-04-10 01:17:56.727470 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-04-10 01:17:56.727478 | orchestrator | Thursday 10 April 2025 01:15:02 +0000 (0:00:10.891) 0:05:51.339 ********
2025-04-10 01:17:56.727486 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:17:56.727494 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:17:56.727501 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:17:56.727509 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.727517 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.727525 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.727533 | orchestrator |
2025-04-10 01:17:56.727541 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-04-10 01:17:56.727549 | orchestrator | Thursday 10 April 2025 01:15:03 +0000 (0:00:00.810) 0:05:52.150 ********
2025-04-10 01:17:56.727557 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:17:56.727564 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:17:56.727572 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:17:56.727580 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.727588 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.727596 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.727604 | orchestrator |
2025-04-10 01:17:56.727611 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-04-10 01:17:56.727624 | orchestrator | Thursday 10 April 2025 01:15:04 +0000 (0:00:00.977) 0:05:53.128 ********
2025-04-10 01:17:56.727632 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.727644 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.727652 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.727660 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:17:56.727666 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:17:56.727673 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:17:56.727680 | orchestrator |
2025-04-10 01:17:56.727687 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-04-10 01:17:56.727694 | orchestrator | Thursday 10 April 2025 01:15:07 +0000 (0:00:03.396) 0:05:56.525 ********
2025-04-10 01:17:56.727726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-10 01:17:56.727735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes':
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.727743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.727750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.727757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.727769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.727783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.727807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.727815 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:17:56.727823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.727830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.727837 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.727848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.727855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.727879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.727894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.727902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.727909 | orchestrator | skipping: [testbed-node-4] 
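The container definitions repeated in the skipped items above all share one Kolla-Ansible shape: a `container_name`, `group`, `image`, an `enabled` flag, a `volumes` list, `dimensions`, and an optional `healthcheck`. Note the empty-string entries (`''`) inside `volumes`: these are optional mounts whose template condition evaluated false, and Kolla-Ansible drops them before the container is created. A minimal sketch of that cleanup, using a trimmed `nova_libvirt` definition taken from the log (the exact filter Kolla-Ansible uses internally is an assumption; this only reproduces the observable result):

```python
# Illustrative sketch, not part of the playbook: show how the empty-string
# volume entries seen in the logged container definitions are filtered out
# to yield the effective bind-mount list.

# Trimmed copy of the 'nova-libvirt' definition from the log output above.
nova_libvirt = {
    "container_name": "nova_libvirt",
    "enabled": True,
    "volumes": [
        "/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "",  # a disabled optional mount renders as an empty string
        "kolla_logs:/var/log/kolla/",
        "",
    ],
}

def effective_volumes(definition):
    """Drop empty entries, mirroring a Jinja2 `volumes | select | list` filter."""
    return [v for v in definition["volumes"] if v]

print(effective_volumes(nova_libvirt))
```

Running this prints only the three real mounts, which matches what ends up on the started container even though the logged definition carries the empty placeholders.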
2025-04-10 01:17:56.727916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.727927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.727935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.727945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.727952 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.727966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 
01:17:56.727974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.727985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.727992 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:17:56.727999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.728017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.728025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.728053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728075 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.728105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.728114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.728121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.728147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728179 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.728186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.728197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.728204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.728229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728262 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.728269 | orchestrator | 2025-04-10 01:17:56.728276 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-04-10 01:17:56.728283 | orchestrator | Thursday 10 April 2025 01:15:09 +0000 (0:00:02.047) 0:05:58.572 ******** 2025-04-10 01:17:56.728290 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-04-10 01:17:56.728297 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-04-10 01:17:56.728304 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:17:56.728310 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-04-10 01:17:56.728317 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-04-10 01:17:56.728324 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:17:56.728331 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-04-10 01:17:56.728338 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-04-10 01:17:56.728345 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:17:56.728352 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-04-10 01:17:56.728359 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-04-10 01:17:56.728366 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.728373 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-04-10 01:17:56.728380 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-04-10 01:17:56.728387 | 
orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.728393 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-04-10 01:17:56.728400 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-04-10 01:17:56.728407 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.728414 | orchestrator | 2025-04-10 01:17:56.728421 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-04-10 01:17:56.728433 | orchestrator | Thursday 10 April 2025 01:15:10 +0000 (0:00:00.891) 0:05:59.463 ******** 2025-04-10 01:17:56.728444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.728457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.728468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.728476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.728483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 
'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-10 01:17:56.728490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-10 01:17:56.728501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-10 01:17:56.728517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-10 01:17:56.728525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}}) 2025-04-10 01:17:56.728533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.728540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.728561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.728568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.728583 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-10 01:17:56.728597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.728627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-10 01:17:56.728642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.728657 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-10 01:17:56.728671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 
'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.728700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728707 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-10 01:17:56.728714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-10 01:17:56.728732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-10 01:17:56.728744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.728766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728781 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.728793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.728815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 
'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.728845 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-10 01:17:56.728852 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-10 01:17:56.728873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-10 01:17:56.728881 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-10 01:17:56.728888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-10 01:17:56.728895 | orchestrator |
2025-04-10 01:17:56.728902 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-04-10 01:17:56.728909 | orchestrator | Thursday 10 April 2025 01:15:14 +0000 (0:00:03.728) 0:06:03.191 ********
2025-04-10 01:17:56.728916 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:17:56.728923 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:17:56.728930 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:17:56.728937 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.728944 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.728951 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.728958 | orchestrator |
2025-04-10 01:17:56.728965 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-04-10 01:17:56.728972 | orchestrator | Thursday 10 April 2025 01:15:15 +0000 (0:00:00.829) 0:06:04.021 ********
2025-04-10 01:17:56.728979 | orchestrator |
2025-04-10 01:17:56.728986 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-04-10 01:17:56.728993 | orchestrator | Thursday 10 April 2025 01:15:15 +0000 (0:00:00.308) 0:06:04.329 ********
2025-04-10 01:17:56.729003 | orchestrator |
2025-04-10 01:17:56.729010 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-04-10 01:17:56.729017 | orchestrator | Thursday 10 April 2025 01:15:15 +0000 (0:00:00.116) 0:06:04.446 ********
2025-04-10 01:17:56.729024 | orchestrator |
2025-04-10 01:17:56.729030 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-04-10 01:17:56.729037 | orchestrator | Thursday 10 April 2025 01:15:16 +0000 (0:00:00.319) 0:06:04.766 ********
2025-04-10 01:17:56.729044 | orchestrator |
2025-04-10 01:17:56.729051 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-04-10 01:17:56.729058 | orchestrator | Thursday 10 April 2025 01:15:16 +0000 (0:00:00.118) 0:06:04.884 ********
2025-04-10 01:17:56.729065 | orchestrator |
2025-04-10 01:17:56.729072 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-04-10 01:17:56.729078 | orchestrator | Thursday 10 April 2025 01:15:16 +0000 (0:00:00.295) 0:06:05.180 ********
2025-04-10 01:17:56.729085 | orchestrator |
2025-04-10 01:17:56.729105 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-04-10 01:17:56.729112 | orchestrator | Thursday 10 April 2025 01:15:16 +0000 (0:00:00.117) 0:06:05.298 ********
2025-04-10 01:17:56.729119 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:17:56.729126 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.729133 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:17:56.729140 | orchestrator |
2025-04-10 01:17:56.729147 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-04-10 01:17:56.729154 | orchestrator | Thursday 10 April 2025 01:15:29 +0000 (0:00:12.666) 0:06:17.964 ********
2025-04-10 01:17:56.729161 | orchestrator | changed: [testbed-node-0]
2025-04-10 01:17:56.729168 | orchestrator | changed: [testbed-node-2]
2025-04-10 01:17:56.729174 | orchestrator | changed: [testbed-node-1]
2025-04-10 01:17:56.729181 | orchestrator |
2025-04-10 01:17:56.729188 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-04-10 01:17:56.729195 | orchestrator | Thursday 10 April 2025 01:15:40 +0000 (0:00:11.720) 0:06:29.684 ********
2025-04-10 01:17:56.729205 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:17:56.729212 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:17:56.729219 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:17:56.729226 | orchestrator |
2025-04-10 01:17:56.729233 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-04-10 01:17:56.729240 | orchestrator | Thursday 10 April 2025 01:16:01 +0000 (0:00:20.695) 0:06:50.380 ********
2025-04-10 01:17:56.729247 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:17:56.729254 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:17:56.729261 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:17:56.729268 | orchestrator |
2025-04-10 01:17:56.729275 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-04-10 01:17:56.729281 | orchestrator | Thursday 10 April 2025 01:16:25 +0000 (0:00:24.182) 0:07:14.562 ********
2025-04-10 01:17:56.729288 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:17:56.729295 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:17:56.729302 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:17:56.729309 | orchestrator |
2025-04-10 01:17:56.729316 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-04-10 01:17:56.729323 | orchestrator | Thursday 10 April 2025 01:16:27 +0000 (0:00:01.166) 0:07:15.729 ********
2025-04-10 01:17:56.729330 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:17:56.729337 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:17:56.729344 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:17:56.729350 | orchestrator |
2025-04-10 01:17:56.729358 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-04-10 01:17:56.729364 | orchestrator | Thursday 10 April 2025 01:16:27 +0000 (0:00:00.802) 0:07:16.531 ********
2025-04-10 01:17:56.729371 | orchestrator | changed: [testbed-node-4]
2025-04-10 01:17:56.729382 | orchestrator | changed: [testbed-node-5]
2025-04-10 01:17:56.729389 | orchestrator | changed: [testbed-node-3]
2025-04-10 01:17:56.729396 | orchestrator |
2025-04-10 01:17:56.729403 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-04-10 01:17:56.729410 | orchestrator | Thursday 10 April 2025 01:16:48 +0000 (0:00:20.590) 0:07:37.122 ********
2025-04-10 01:17:56.729417 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:17:56.729423 | orchestrator |
2025-04-10 01:17:56.729430 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-04-10 01:17:56.729440 | orchestrator | Thursday 10 April 2025 01:16:48 +0000 (0:00:00.127) 0:07:37.249 ********
2025-04-10 01:17:56.729447 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:17:56.729454 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:17:56.729461 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.729468 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.729475 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.729482 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-04-10 01:17:56.729489 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-04-10 01:17:56.729496 | orchestrator |
2025-04-10 01:17:56.729503 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-04-10 01:17:56.729510 | orchestrator | Thursday 10 April 2025 01:17:10 +0000 (0:00:21.906) 0:07:59.156 ********
2025-04-10 01:17:56.729517 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:17:56.729526 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.729533 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:17:56.729540 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:17:56.729547 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.729554 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.729561 | orchestrator |
2025-04-10 01:17:56.729568 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-04-10 01:17:56.729575 | orchestrator | Thursday 10 April 2025 01:17:20 +0000 (0:00:10.432) 0:08:09.588 ********
2025-04-10 01:17:56.729582 | orchestrator | skipping: [testbed-node-5]
2025-04-10 01:17:56.729589 | orchestrator | skipping: [testbed-node-4]
2025-04-10 01:17:56.729596 | orchestrator | skipping: [testbed-node-0]
2025-04-10 01:17:56.729603 | orchestrator | skipping: [testbed-node-2]
2025-04-10 01:17:56.729610 | orchestrator | skipping: [testbed-node-1]
2025-04-10 01:17:56.729616 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-04-10 01:17:56.729623 | orchestrator |
2025-04-10 01:17:56.729630 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-04-10 01:17:56.729637 | orchestrator | Thursday 10 April 2025 01:17:23 +0000 (0:00:03.113) 0:08:12.702 ********
2025-04-10 01:17:56.729644 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-04-10 01:17:56.729651 | orchestrator |
2025-04-10 01:17:56.729658 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-04-10 01:17:56.729665 | orchestrator | Thursday 10 April 2025 01:17:35 +0000 (0:00:11.055) 0:08:23.757 ********
2025-04-10 01:17:56.729672 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-04-10 01:17:56.729678 | orchestrator |
2025-04-10 01:17:56.729685 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-04-10 01:17:56.729692 | orchestrator | Thursday 10 April 2025 01:17:36 +0000 (0:00:01.528) 0:08:25.029 ********
2025-04-10 01:17:56.729699 | orchestrator | skipping: [testbed-node-3]
2025-04-10 01:17:56.729706 | orchestrator |
2025-04-10 01:17:56.729713 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-04-10 01:17:56.729720 | orchestrator | Thursday 10 April 2025 01:17:37 +0000 (0:00:01.528) 0:08:26.557 ********
2025-04-10 01:17:56.729727 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-04-10 01:17:56.729734 | orchestrator |
2025-04-10 01:17:56.729741 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-04-10
01:17:56.729751 | orchestrator | Thursday 10 April 2025 01:17:47 +0000 (0:00:09.288) 0:08:35.845 ******** 2025-04-10 01:17:56.729758 | orchestrator | ok: [testbed-node-3] 2025-04-10 01:17:56.729765 | orchestrator | ok: [testbed-node-4] 2025-04-10 01:17:56.729772 | orchestrator | ok: [testbed-node-5] 2025-04-10 01:17:56.729779 | orchestrator | ok: [testbed-node-0] 2025-04-10 01:17:56.729785 | orchestrator | ok: [testbed-node-2] 2025-04-10 01:17:56.729792 | orchestrator | ok: [testbed-node-1] 2025-04-10 01:17:56.729799 | orchestrator | 2025-04-10 01:17:56.729808 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-04-10 01:17:56.729816 | orchestrator | 2025-04-10 01:17:56.729823 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-04-10 01:17:56.729830 | orchestrator | Thursday 10 April 2025 01:17:49 +0000 (0:00:02.296) 0:08:38.142 ******** 2025-04-10 01:17:56.729837 | orchestrator | changed: [testbed-node-0] 2025-04-10 01:17:56.729844 | orchestrator | changed: [testbed-node-1] 2025-04-10 01:17:56.729850 | orchestrator | changed: [testbed-node-2] 2025-04-10 01:17:56.729857 | orchestrator | 2025-04-10 01:17:56.729864 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-04-10 01:17:56.729871 | orchestrator | 2025-04-10 01:17:56.729878 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-04-10 01:17:56.729885 | orchestrator | Thursday 10 April 2025 01:17:50 +0000 (0:00:01.100) 0:08:39.243 ******** 2025-04-10 01:17:56.729892 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.729899 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.729906 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.729913 | orchestrator | 2025-04-10 01:17:56.729919 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 
2025-04-10 01:17:56.729926 | orchestrator | 2025-04-10 01:17:56.729933 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-04-10 01:17:56.729940 | orchestrator | Thursday 10 April 2025 01:17:51 +0000 (0:00:00.853) 0:08:40.096 ******** 2025-04-10 01:17:56.729947 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-04-10 01:17:56.729954 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-04-10 01:17:56.729961 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-04-10 01:17:56.729968 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-04-10 01:17:56.729975 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-04-10 01:17:56.729982 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-04-10 01:17:56.729989 | orchestrator | skipping: [testbed-node-3] 2025-04-10 01:17:56.729996 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-04-10 01:17:56.730003 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-04-10 01:17:56.730010 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-04-10 01:17:56.730049 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-04-10 01:17:56.730057 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-04-10 01:17:56.730064 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-04-10 01:17:56.730071 | orchestrator | skipping: [testbed-node-4] 2025-04-10 01:17:56.730078 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-04-10 01:17:56.730085 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-04-10 01:17:56.730105 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-04-10 01:17:56.730113 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-novncproxy)  2025-04-10 01:17:56.730120 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-04-10 01:17:56.730127 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-04-10 01:17:56.730134 | orchestrator | skipping: [testbed-node-5] 2025-04-10 01:17:56.730141 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-04-10 01:17:56.730152 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-04-10 01:17:56.730159 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-04-10 01:17:56.730166 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-04-10 01:17:56.730173 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-04-10 01:17:56.730180 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-04-10 01:17:56.730186 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.730193 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-04-10 01:17:56.730200 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-04-10 01:17:56.730207 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-04-10 01:17:56.730214 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-04-10 01:17:56.730221 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-04-10 01:17:56.730228 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-04-10 01:17:56.730235 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.730242 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-04-10 01:17:56.730249 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-04-10 01:17:56.730256 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-04-10 01:17:56.730263 | orchestrator | skipping: [testbed-node-2] => 
(item=nova-novncproxy)  2025-04-10 01:17:56.730270 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-04-10 01:17:56.730276 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-04-10 01:17:56.730283 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:56.730290 | orchestrator | 2025-04-10 01:17:56.730297 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-04-10 01:17:56.730304 | orchestrator | 2025-04-10 01:17:56.730311 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-04-10 01:17:56.730318 | orchestrator | Thursday 10 April 2025 01:17:52 +0000 (0:00:01.568) 0:08:41.665 ******** 2025-04-10 01:17:56.730325 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-04-10 01:17:56.730332 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-04-10 01:17:56.730339 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:56.730345 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-04-10 01:17:56.730352 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-04-10 01:17:56.730359 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:56.730370 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-04-10 01:17:59.753558 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-04-10 01:17:59.753687 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:59.753708 | orchestrator | 2025-04-10 01:17:59.753723 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-04-10 01:17:59.753738 | orchestrator | 2025-04-10 01:17:59.753753 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-04-10 01:17:59.753767 | orchestrator | Thursday 10 April 2025 01:17:53 +0000 (0:00:00.691) 0:08:42.356 ******** 2025-04-10 
01:17:59.753781 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:59.753795 | orchestrator | 2025-04-10 01:17:59.753810 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-04-10 01:17:59.753824 | orchestrator | 2025-04-10 01:17:59.753872 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-04-10 01:17:59.753887 | orchestrator | Thursday 10 April 2025 01:17:54 +0000 (0:00:00.972) 0:08:43.328 ******** 2025-04-10 01:17:59.753901 | orchestrator | skipping: [testbed-node-0] 2025-04-10 01:17:59.753915 | orchestrator | skipping: [testbed-node-1] 2025-04-10 01:17:59.753929 | orchestrator | skipping: [testbed-node-2] 2025-04-10 01:17:59.753943 | orchestrator | 2025-04-10 01:17:59.753957 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-10 01:17:59.753995 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-10 01:17:59.754012 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-04-10 01:17:59.754081 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-04-10 01:17:59.754122 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-04-10 01:17:59.754139 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-04-10 01:17:59.754155 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-04-10 01:17:59.754170 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-04-10 01:17:59.754186 | orchestrator | 2025-04-10 01:17:59.754201 | orchestrator | 2025-04-10 01:17:59.754217 | orchestrator | TASKS RECAP 
******************************************************************** 2025-04-10 01:17:59.754233 | orchestrator | Thursday 10 April 2025 01:17:55 +0000 (0:00:00.580) 0:08:43.909 ******** 2025-04-10 01:17:59.754249 | orchestrator | =============================================================================== 2025-04-10 01:17:59.754265 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.35s 2025-04-10 01:17:59.754280 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 24.18s 2025-04-10 01:17:59.754296 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.91s 2025-04-10 01:17:59.754312 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.70s 2025-04-10 01:17:59.754328 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 20.59s 2025-04-10 01:17:59.754343 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.19s 2025-04-10 01:17:59.754358 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.85s 2025-04-10 01:17:59.754374 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.52s 2025-04-10 01:17:59.754389 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 15.90s 2025-04-10 01:17:59.754404 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.67s 2025-04-10 01:17:59.754419 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.72s 2025-04-10 01:17:59.754435 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.39s 2025-04-10 01:17:59.754450 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.06s 2025-04-10 01:17:59.754464 | orchestrator | nova-cell : Copying files for 
nova-ssh --------------------------------- 10.89s 2025-04-10 01:17:59.754478 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.85s 2025-04-10 01:17:59.754493 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.64s 2025-04-10 01:17:59.754506 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.43s 2025-04-10 01:17:59.754520 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.29s 2025-04-10 01:17:59.754534 | orchestrator | nova : Restart nova-api container --------------------------------------- 9.08s 2025-04-10 01:17:59.754548 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.69s 2025-04-10 01:17:59.754562 | orchestrator | 2025-04-10 01:17:56 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:17:59.754586 | orchestrator | 2025-04-10 01:17:56 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:17:59.754619 | orchestrator | 2025-04-10 01:17:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:02.803259 | orchestrator | 2025-04-10 01:17:59 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:02.803363 | orchestrator | 2025-04-10 01:18:02 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:05.856443 | orchestrator | 2025-04-10 01:18:02 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:05.856590 | orchestrator | 2025-04-10 01:18:05 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:08.901720 | orchestrator | 2025-04-10 01:18:05 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:08.901827 | orchestrator | 2025-04-10 01:18:08 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:11.951511 | orchestrator | 2025-04-10 01:18:08 | INFO  | 
Wait 1 second(s) until the next check 2025-04-10 01:18:11.951657 | orchestrator | 2025-04-10 01:18:11 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:14.994446 | orchestrator | 2025-04-10 01:18:11 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:14.994585 | orchestrator | 2025-04-10 01:18:14 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:18.036732 | orchestrator | 2025-04-10 01:18:14 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:18.036880 | orchestrator | 2025-04-10 01:18:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:21.080958 | orchestrator | 2025-04-10 01:18:18 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:21.081091 | orchestrator | 2025-04-10 01:18:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:24.133406 | orchestrator | 2025-04-10 01:18:21 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:24.133555 | orchestrator | 2025-04-10 01:18:24 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:27.175498 | orchestrator | 2025-04-10 01:18:24 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:27.175645 | orchestrator | 2025-04-10 01:18:27 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:30.217946 | orchestrator | 2025-04-10 01:18:27 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:30.218195 | orchestrator | 2025-04-10 01:18:30 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:33.268493 | orchestrator | 2025-04-10 01:18:30 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:33.268660 | orchestrator | 2025-04-10 01:18:33 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:36.331902 | orchestrator | 2025-04-10 01:18:33 | INFO  | Wait 1 
second(s) until the next check 2025-04-10 01:18:36.332043 | orchestrator | 2025-04-10 01:18:36 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:39.381829 | orchestrator | 2025-04-10 01:18:36 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:39.381973 | orchestrator | 2025-04-10 01:18:39 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:42.439798 | orchestrator | 2025-04-10 01:18:39 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:42.439909 | orchestrator | 2025-04-10 01:18:42 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:45.500407 | orchestrator | 2025-04-10 01:18:42 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:45.500546 | orchestrator | 2025-04-10 01:18:45 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:48.545546 | orchestrator | 2025-04-10 01:18:45 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:48.545690 | orchestrator | 2025-04-10 01:18:48 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:51.592604 | orchestrator | 2025-04-10 01:18:48 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:51.592761 | orchestrator | 2025-04-10 01:18:51 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:54.631175 | orchestrator | 2025-04-10 01:18:51 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:54.631307 | orchestrator | 2025-04-10 01:18:54 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:18:57.672726 | orchestrator | 2025-04-10 01:18:54 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:18:57.672880 | orchestrator | 2025-04-10 01:18:57 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:00.715205 | orchestrator | 2025-04-10 01:18:57 | INFO  | Wait 1 second(s) until 
the next check 2025-04-10 01:19:00.715350 | orchestrator | 2025-04-10 01:19:00 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:03.754000 | orchestrator | 2025-04-10 01:19:00 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:03.754213 | orchestrator | 2025-04-10 01:19:03 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:06.801404 | orchestrator | 2025-04-10 01:19:03 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:06.801501 | orchestrator | 2025-04-10 01:19:06 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:09.844795 | orchestrator | 2025-04-10 01:19:06 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:09.844965 | orchestrator | 2025-04-10 01:19:09 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:12.884226 | orchestrator | 2025-04-10 01:19:09 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:12.884368 | orchestrator | 2025-04-10 01:19:12 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:15.923402 | orchestrator | 2025-04-10 01:19:12 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:15.923542 | orchestrator | 2025-04-10 01:19:15 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:18.968386 | orchestrator | 2025-04-10 01:19:15 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:18.968548 | orchestrator | 2025-04-10 01:19:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:22.015690 | orchestrator | 2025-04-10 01:19:18 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:22.015829 | orchestrator | 2025-04-10 01:19:22 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:25.065564 | orchestrator | 2025-04-10 01:19:22 | INFO  | Wait 1 second(s) until the next check 
2025-04-10 01:19:25.065693 | orchestrator | 2025-04-10 01:19:25 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:28.107715 | orchestrator | 2025-04-10 01:19:25 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:28.107852 | orchestrator | 2025-04-10 01:19:28 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:31.158480 | orchestrator | 2025-04-10 01:19:28 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:31.158630 | orchestrator | 2025-04-10 01:19:31 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:34.207362 | orchestrator | 2025-04-10 01:19:31 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:34.207467 | orchestrator | 2025-04-10 01:19:34 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:37.256016 | orchestrator | 2025-04-10 01:19:34 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:37.256214 | orchestrator | 2025-04-10 01:19:37 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:40.296307 | orchestrator | 2025-04-10 01:19:37 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:40.296422 | orchestrator | 2025-04-10 01:19:40 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:43.336196 | orchestrator | 2025-04-10 01:19:40 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:43.336345 | orchestrator | 2025-04-10 01:19:43 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:46.392750 | orchestrator | 2025-04-10 01:19:43 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:46.392842 | orchestrator | 2025-04-10 01:19:46 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:49.443493 | orchestrator | 2025-04-10 01:19:46 | INFO  | Wait 1 second(s) until the next check 2025-04-10 
01:19:49.443643 | orchestrator | 2025-04-10 01:19:49 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:52.496465 | orchestrator | 2025-04-10 01:19:49 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:52.496603 | orchestrator | 2025-04-10 01:19:52 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:55.545983 | orchestrator | 2025-04-10 01:19:52 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:55.546274 | orchestrator | 2025-04-10 01:19:55 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:19:58.592522 | orchestrator | 2025-04-10 01:19:55 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:19:58.592659 | orchestrator | 2025-04-10 01:19:58 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:01.638672 | orchestrator | 2025-04-10 01:19:58 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:01.638804 | orchestrator | 2025-04-10 01:20:01 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:04.688358 | orchestrator | 2025-04-10 01:20:01 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:04.688504 | orchestrator | 2025-04-10 01:20:04 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:07.737533 | orchestrator | 2025-04-10 01:20:04 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:07.737669 | orchestrator | 2025-04-10 01:20:07 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:10.777468 | orchestrator | 2025-04-10 01:20:07 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:10.777674 | orchestrator | 2025-04-10 01:20:10 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:13.825716 | orchestrator | 2025-04-10 01:20:10 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:13.825883 
| orchestrator | 2025-04-10 01:20:13 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:16.866556 | orchestrator | 2025-04-10 01:20:13 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:16.866695 | orchestrator | 2025-04-10 01:20:16 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:19.910122 | orchestrator | 2025-04-10 01:20:16 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:19.910341 | orchestrator | 2025-04-10 01:20:19 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:22.967875 | orchestrator | 2025-04-10 01:20:19 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:22.968021 | orchestrator | 2025-04-10 01:20:22 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:26.020959 | orchestrator | 2025-04-10 01:20:22 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:26.021100 | orchestrator | 2025-04-10 01:20:26 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:29.060222 | orchestrator | 2025-04-10 01:20:26 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:29.060355 | orchestrator | 2025-04-10 01:20:29 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:32.110430 | orchestrator | 2025-04-10 01:20:29 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:32.111157 | orchestrator | 2025-04-10 01:20:32 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:35.166358 | orchestrator | 2025-04-10 01:20:32 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:35.166504 | orchestrator | 2025-04-10 01:20:35 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:38.221850 | orchestrator | 2025-04-10 01:20:35 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:38.221991 | orchestrator 
| 2025-04-10 01:20:38 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:41.275978 | orchestrator | 2025-04-10 01:20:38 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:41.276128 | orchestrator | 2025-04-10 01:20:41 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:44.323091 | orchestrator | 2025-04-10 01:20:41 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:44.323254 | orchestrator | 2025-04-10 01:20:44 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:47.380044 | orchestrator | 2025-04-10 01:20:44 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:47.380272 | orchestrator | 2025-04-10 01:20:47 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:50.421997 | orchestrator | 2025-04-10 01:20:47 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:50.422260 | orchestrator | 2025-04-10 01:20:50 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:53.479338 | orchestrator | 2025-04-10 01:20:50 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:53.479486 | orchestrator | 2025-04-10 01:20:53 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:56.539720 | orchestrator | 2025-04-10 01:20:53 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:56.539870 | orchestrator | 2025-04-10 01:20:56 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:20:59.582429 | orchestrator | 2025-04-10 01:20:56 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:20:59.582564 | orchestrator | 2025-04-10 01:20:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:21:02.626619 | orchestrator | 2025-04-10 01:20:59 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:21:02.626784 | orchestrator | 2025-04-10 
01:21:02 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:21:05.679885 | orchestrator | 2025-04-10 01:21:02 | INFO  | Wait 1 second(s) until the next check
2025-04-10 01:21:05.680031 | orchestrator | 2025-04-10 01:21:05 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
[... identical STARTED / "Wait 1 second(s) until the next check" polling for task 6ceb9c04-fa5a-4943-be37-e776008b03dc repeated every ~3 seconds from 01:21:05 to 01:24:32, elided ...]
2025-04-10 01:24:35.928737 | orchestrator | 2025-04-10 01:24:35 | INFO  | Task b6540837-a303-4dda-a62f-fe1960f59cb9 is in state STARTED
2025-04-10 01:24:48.148317 | orchestrator | 2025-04-10 01:24:48 | INFO  | Task b6540837-a303-4dda-a62f-fe1960f59cb9 is in state SUCCESS
2025-04-10 01:24:48.148605 | orchestrator | 2025-04-10 01:24:48 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
[... identical STARTED / "Wait 1 second(s) until the next check" polling for task 6ceb9c04-fa5a-4943-be37-e776008b03dc repeated every ~3 seconds from 01:24:51 to 01:29:28, elided; the task is still STARTED at the end of this excerpt ...]
2025-04-10 01:29:28.577049 | orchestrator | 2025-04-10 01:29:28 | INFO
 | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:29:31.621257 | orchestrator | 2025-04-10 01:29:28 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:29:31.622165 | orchestrator | 2025-04-10 01:29:31 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:29:34.672000 | orchestrator | 2025-04-10 01:29:31 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:29:34.672146 | orchestrator | 2025-04-10 01:29:34 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:29:37.716607 | orchestrator | 2025-04-10 01:29:34 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:29:37.716782 | orchestrator | 2025-04-10 01:29:37 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:29:40.764552 | orchestrator | 2025-04-10 01:29:37 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:29:40.764690 | orchestrator | 2025-04-10 01:29:40 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:29:43.810659 | orchestrator | 2025-04-10 01:29:40 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:29:43.810846 | orchestrator | 2025-04-10 01:29:43 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:29:46.852313 | orchestrator | 2025-04-10 01:29:43 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:29:46.852454 | orchestrator | 2025-04-10 01:29:46 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:29:49.896951 | orchestrator | 2025-04-10 01:29:46 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:29:49.897100 | orchestrator | 2025-04-10 01:29:49 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:29:52.947230 | orchestrator | 2025-04-10 01:29:49 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:29:52.947376 | orchestrator | 2025-04-10 01:29:52 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:29:55.992526 | orchestrator | 2025-04-10 01:29:52 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:29:55.992665 | orchestrator | 2025-04-10 01:29:55 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:29:59.044415 | orchestrator | 2025-04-10 01:29:55 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:29:59.044585 | orchestrator | 2025-04-10 01:29:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:02.090830 | orchestrator | 2025-04-10 01:29:59 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:02.090974 | orchestrator | 2025-04-10 01:30:02 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:05.134974 | orchestrator | 2025-04-10 01:30:02 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:05.135117 | orchestrator | 2025-04-10 01:30:05 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:08.188545 | orchestrator | 2025-04-10 01:30:05 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:08.188645 | orchestrator | 2025-04-10 01:30:08 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:11.241619 | orchestrator | 2025-04-10 01:30:08 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:11.241758 | orchestrator | 2025-04-10 01:30:11 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:14.289526 | orchestrator | 2025-04-10 01:30:11 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:14.289648 | orchestrator | 2025-04-10 01:30:14 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:17.346354 | orchestrator | 2025-04-10 01:30:14 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:17.346513 | orchestrator | 2025-04-10 01:30:17 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:20.391080 | orchestrator | 2025-04-10 01:30:17 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:20.391264 | orchestrator | 2025-04-10 01:30:20 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:23.440921 | orchestrator | 2025-04-10 01:30:20 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:23.441068 | orchestrator | 2025-04-10 01:30:23 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:26.489070 | orchestrator | 2025-04-10 01:30:23 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:26.489252 | orchestrator | 2025-04-10 01:30:26 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:29.531321 | orchestrator | 2025-04-10 01:30:26 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:29.531475 | orchestrator | 2025-04-10 01:30:29 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:32.572381 | orchestrator | 2025-04-10 01:30:29 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:32.572534 | orchestrator | 2025-04-10 01:30:32 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:35.612016 | orchestrator | 2025-04-10 01:30:32 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:35.612161 | orchestrator | 2025-04-10 01:30:35 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:38.660355 | orchestrator | 2025-04-10 01:30:35 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:38.660510 | orchestrator | 2025-04-10 01:30:38 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:41.701095 | orchestrator | 2025-04-10 01:30:38 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:41.701318 | orchestrator | 2025-04-10 01:30:41 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:44.747487 | orchestrator | 2025-04-10 01:30:41 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:44.747668 | orchestrator | 2025-04-10 01:30:44 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:47.797312 | orchestrator | 2025-04-10 01:30:44 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:47.797454 | orchestrator | 2025-04-10 01:30:47 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:50.839288 | orchestrator | 2025-04-10 01:30:47 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:50.839433 | orchestrator | 2025-04-10 01:30:50 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:53.884713 | orchestrator | 2025-04-10 01:30:50 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:53.884834 | orchestrator | 2025-04-10 01:30:53 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:56.937460 | orchestrator | 2025-04-10 01:30:53 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:56.937607 | orchestrator | 2025-04-10 01:30:56 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:30:59.981006 | orchestrator | 2025-04-10 01:30:56 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:30:59.981233 | orchestrator | 2025-04-10 01:30:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:03.022765 | orchestrator | 2025-04-10 01:30:59 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:03.022899 | orchestrator | 2025-04-10 01:31:03 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:06.067948 | orchestrator | 2025-04-10 01:31:03 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:06.068091 | orchestrator | 2025-04-10 01:31:06 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:09.117698 | orchestrator | 2025-04-10 01:31:06 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:09.117812 | orchestrator | 2025-04-10 01:31:09 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:12.173388 | orchestrator | 2025-04-10 01:31:09 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:12.173534 | orchestrator | 2025-04-10 01:31:12 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:15.222447 | orchestrator | 2025-04-10 01:31:12 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:15.222602 | orchestrator | 2025-04-10 01:31:15 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:18.274751 | orchestrator | 2025-04-10 01:31:15 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:18.274893 | orchestrator | 2025-04-10 01:31:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:21.328910 | orchestrator | 2025-04-10 01:31:18 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:21.329030 | orchestrator | 2025-04-10 01:31:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:24.377829 | orchestrator | 2025-04-10 01:31:21 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:24.377974 | orchestrator | 2025-04-10 01:31:24 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:27.436771 | orchestrator | 2025-04-10 01:31:24 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:27.436898 | orchestrator | 2025-04-10 01:31:27 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:30.489045 | orchestrator | 2025-04-10 01:31:27 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:30.489230 | orchestrator | 2025-04-10 01:31:30 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:33.539105 | orchestrator | 2025-04-10 01:31:30 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:33.539296 | orchestrator | 2025-04-10 01:31:33 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:36.595579 | orchestrator | 2025-04-10 01:31:33 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:36.595728 | orchestrator | 2025-04-10 01:31:36 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:39.640143 | orchestrator | 2025-04-10 01:31:36 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:39.640329 | orchestrator | 2025-04-10 01:31:39 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:42.686402 | orchestrator | 2025-04-10 01:31:39 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:42.687318 | orchestrator | 2025-04-10 01:31:42 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:45.735348 | orchestrator | 2025-04-10 01:31:42 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:45.735495 | orchestrator | 2025-04-10 01:31:45 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:48.782810 | orchestrator | 2025-04-10 01:31:45 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:48.782954 | orchestrator | 2025-04-10 01:31:48 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:51.827171 | orchestrator | 2025-04-10 01:31:48 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:51.827360 | orchestrator | 2025-04-10 01:31:51 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:54.877144 | orchestrator | 2025-04-10 01:31:51 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:54.877310 | orchestrator | 2025-04-10 01:31:54 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:31:57.933912 | orchestrator | 2025-04-10 01:31:54 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:31:57.934118 | orchestrator | 2025-04-10 01:31:57 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:00.975641 | orchestrator | 2025-04-10 01:31:57 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:00.975788 | orchestrator | 2025-04-10 01:32:00 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:04.017743 | orchestrator | 2025-04-10 01:32:00 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:04.017921 | orchestrator | 2025-04-10 01:32:04 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:07.063014 | orchestrator | 2025-04-10 01:32:04 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:07.063145 | orchestrator | 2025-04-10 01:32:07 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:10.103669 | orchestrator | 2025-04-10 01:32:07 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:10.103810 | orchestrator | 2025-04-10 01:32:10 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:13.155011 | orchestrator | 2025-04-10 01:32:10 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:13.155155 | orchestrator | 2025-04-10 01:32:13 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:16.204514 | orchestrator | 2025-04-10 01:32:13 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:16.204696 | orchestrator | 2025-04-10 01:32:16 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:19.254320 | orchestrator | 2025-04-10 01:32:16 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:19.254466 | orchestrator | 2025-04-10 01:32:19 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:22.299671 | orchestrator | 2025-04-10 01:32:19 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:22.299817 | orchestrator | 2025-04-10 01:32:22 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:25.350337 | orchestrator | 2025-04-10 01:32:22 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:25.350487 | orchestrator | 2025-04-10 01:32:25 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:28.407019 | orchestrator | 2025-04-10 01:32:25 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:28.407170 | orchestrator | 2025-04-10 01:32:28 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:31.469559 | orchestrator | 2025-04-10 01:32:28 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:31.469703 | orchestrator | 2025-04-10 01:32:31 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:34.511648 | orchestrator | 2025-04-10 01:32:31 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:34.511799 | orchestrator | 2025-04-10 01:32:34 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:37.561735 | orchestrator | 2025-04-10 01:32:34 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:37.561883 | orchestrator | 2025-04-10 01:32:37 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:40.604112 | orchestrator | 2025-04-10 01:32:37 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:40.604293 | orchestrator | 2025-04-10 01:32:40 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:43.656869 | orchestrator | 2025-04-10 01:32:40 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:43.656988 | orchestrator | 2025-04-10 01:32:43 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:46.700787 | orchestrator | 2025-04-10 01:32:43 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:46.700937 | orchestrator | 2025-04-10 01:32:46 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:49.749369 | orchestrator | 2025-04-10 01:32:46 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:49.749579 | orchestrator | 2025-04-10 01:32:49 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:52.801318 | orchestrator | 2025-04-10 01:32:49 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:52.801499 | orchestrator | 2025-04-10 01:32:52 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:55.849851 | orchestrator | 2025-04-10 01:32:52 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:55.849992 | orchestrator | 2025-04-10 01:32:55 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:32:58.886438 | orchestrator | 2025-04-10 01:32:55 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:32:58.886556 | orchestrator | 2025-04-10 01:32:58 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:01.931373 | orchestrator | 2025-04-10 01:32:58 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:01.931540 | orchestrator | 2025-04-10 01:33:01 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:04.971776 | orchestrator | 2025-04-10 01:33:01 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:04.971878 | orchestrator | 2025-04-10 01:33:04 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:08.025314 | orchestrator | 2025-04-10 01:33:04 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:08.025458 | orchestrator | 2025-04-10 01:33:08 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:08.026069 | orchestrator | 2025-04-10 01:33:08 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:11.075480 | orchestrator | 2025-04-10 01:33:11 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:14.118779 | orchestrator | 2025-04-10 01:33:11 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:14.118928 | orchestrator | 2025-04-10 01:33:14 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:17.168782 | orchestrator | 2025-04-10 01:33:14 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:17.168927 | orchestrator | 2025-04-10 01:33:17 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:20.216825 | orchestrator | 2025-04-10 01:33:17 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:20.216933 | orchestrator | 2025-04-10 01:33:20 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:23.262395 | orchestrator | 2025-04-10 01:33:20 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:23.262540 | orchestrator | 2025-04-10 01:33:23 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:26.306203 | orchestrator | 2025-04-10 01:33:23 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:26.306318 | orchestrator | 2025-04-10 01:33:26 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:29.353059 | orchestrator | 2025-04-10 01:33:26 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:29.353238 | orchestrator | 2025-04-10 01:33:29 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:32.400066 | orchestrator | 2025-04-10 01:33:29 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:32.400255 | orchestrator | 2025-04-10 01:33:32 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:35.444587 | orchestrator | 2025-04-10 01:33:32 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:35.444706 | orchestrator | 2025-04-10 01:33:35 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:38.486691 | orchestrator | 2025-04-10 01:33:35 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:38.486812 | orchestrator | 2025-04-10 01:33:38 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:41.535712 | orchestrator | 2025-04-10 01:33:38 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:41.535860 | orchestrator | 2025-04-10 01:33:41 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:44.582822 | orchestrator | 2025-04-10 01:33:41 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:44.582969 | orchestrator | 2025-04-10 01:33:44 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:47.624701 | orchestrator | 2025-04-10 01:33:44 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:47.624879 | orchestrator | 2025-04-10 01:33:47 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:50.675333 | orchestrator | 2025-04-10 01:33:47 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:50.675484 | orchestrator | 2025-04-10 01:33:50 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:53.720532 | orchestrator | 2025-04-10 01:33:50 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:53.720676 | orchestrator | 2025-04-10 01:33:53 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:56.781692 | orchestrator | 2025-04-10 01:33:53 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:56.781936 | orchestrator | 2025-04-10 01:33:56 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:33:59.829967 | orchestrator | 2025-04-10 01:33:56 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:33:59.830169 | orchestrator | 2025-04-10 01:33:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:02.876671 | orchestrator | 2025-04-10 01:33:59 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:02.876788 | orchestrator | 2025-04-10 01:34:02 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:05.918804 | orchestrator | 2025-04-10 01:34:02 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:05.918952 | orchestrator | 2025-04-10 01:34:05 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:08.961105 | orchestrator | 2025-04-10 01:34:05 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:08.961305 | orchestrator | 2025-04-10 01:34:08 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:12.019751 | orchestrator | 2025-04-10 01:34:08 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:12.019892 | orchestrator | 2025-04-10 01:34:12 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:15.063122 | orchestrator | 2025-04-10 01:34:12 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:15.063280 | orchestrator | 2025-04-10 01:34:15 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:18.115421 | orchestrator | 2025-04-10 01:34:15 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:18.115571 | orchestrator | 2025-04-10 01:34:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:21.164938 | orchestrator | 2025-04-10 01:34:18 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:21.165084 | orchestrator | 2025-04-10 01:34:21 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:24.214606 | orchestrator | 2025-04-10 01:34:21 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:24.214752 | orchestrator | 2025-04-10 01:34:24 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:27.259417 | orchestrator | 2025-04-10 01:34:24 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:27.259561 | orchestrator | 2025-04-10 01:34:27 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:30.314463 | orchestrator | 2025-04-10 01:34:27 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:30.314661 | orchestrator | 2025-04-10 01:34:30 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:33.361407 | orchestrator | 2025-04-10 01:34:30 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:33.361588 | orchestrator | 2025-04-10 01:34:33 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:36.412768 | orchestrator | 2025-04-10 01:34:33 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:36.412908 | orchestrator | 2025-04-10 01:34:36 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:36.413860 | orchestrator | 2025-04-10 01:34:36 | INFO  | Task 1a7f5f73-2ae6-4fee-abb2-6b442d54e636 is in state STARTED 2025-04-10 01:34:39.479970 | orchestrator | 2025-04-10 01:34:36 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:39.480119 | orchestrator | 2025-04-10 01:34:39 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:42.553045 | orchestrator | 2025-04-10 01:34:39 | INFO  | Task 1a7f5f73-2ae6-4fee-abb2-6b442d54e636 is in state STARTED 2025-04-10 01:34:42.553172 | orchestrator | 2025-04-10 01:34:39 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:42.553210 | orchestrator | 2025-04-10 01:34:42 | INFO 
 | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:42.554824 | orchestrator | 2025-04-10 01:34:42 | INFO  | Task 1a7f5f73-2ae6-4fee-abb2-6b442d54e636 is in state STARTED 2025-04-10 01:34:45.607558 | orchestrator | 2025-04-10 01:34:42 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:45.607698 | orchestrator | 2025-04-10 01:34:45 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:45.608339 | orchestrator | 2025-04-10 01:34:45 | INFO  | Task 1a7f5f73-2ae6-4fee-abb2-6b442d54e636 is in state STARTED 2025-04-10 01:34:45.608545 | orchestrator | 2025-04-10 01:34:45 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:48.653958 | orchestrator | 2025-04-10 01:34:48 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:48.654491 | orchestrator | 2025-04-10 01:34:48 | INFO  | Task 1a7f5f73-2ae6-4fee-abb2-6b442d54e636 is in state SUCCESS 2025-04-10 01:34:51.701809 | orchestrator | 2025-04-10 01:34:48 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:51.701961 | orchestrator | 2025-04-10 01:34:51 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:54.747792 | orchestrator | 2025-04-10 01:34:51 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:54.747904 | orchestrator | 2025-04-10 01:34:54 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:34:57.802447 | orchestrator | 2025-04-10 01:34:54 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:34:57.802689 | orchestrator | 2025-04-10 01:34:57 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:35:00.846405 | orchestrator | 2025-04-10 01:34:57 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:35:00.846540 | orchestrator | 2025-04-10 01:35:00 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:35:03.890178 | 
orchestrator | 2025-04-10 01:35:00 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:35:03.890406 | orchestrator | 2025-04-10 01:35:03 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
[... repeated "state STARTED" / "Wait 1 second(s) until the next check" polling entries from 2025-04-10 01:35:06 to 01:43:08 elided ...]
2025-04-10 01:43:11.600776 | orchestrator | 2025-04-10 01:43:08 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:11.601007 | orchestrator | 2025-04-10 01:43:11 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:14.644226 | orchestrator | 2025-04-10 01:43:11 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:14.644364 | orchestrator | 2025-04-10 01:43:14 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:17.685805 | orchestrator | 2025-04-10 01:43:14 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:17.686770 | orchestrator | 2025-04-10 01:43:17 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:20.724448 | orchestrator | 2025-04-10 01:43:17 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:20.724598 | orchestrator | 2025-04-10 01:43:20 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:23.771864 | orchestrator | 2025-04-10 01:43:20 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:23.771983 | orchestrator | 2025-04-10 01:43:23 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:26.822852 | orchestrator | 2025-04-10 01:43:23 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:26.823002 | orchestrator | 2025-04-10 01:43:26 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:29.868452 | orchestrator | 2025-04-10 01:43:26 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:29.868593 | orchestrator | 2025-04-10 01:43:29 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:32.911946 | orchestrator | 2025-04-10 01:43:29 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:32.912127 | orchestrator | 2025-04-10 01:43:32 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:35.957162 | orchestrator | 2025-04-10 01:43:32 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:35.957300 | orchestrator | 2025-04-10 01:43:35 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:39.003681 | orchestrator | 2025-04-10 01:43:35 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:39.003824 | orchestrator | 2025-04-10 01:43:39 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:42.055628 | orchestrator | 2025-04-10 01:43:39 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:42.055778 | orchestrator | 2025-04-10 01:43:42 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:45.096998 | orchestrator | 2025-04-10 01:43:42 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:45.097196 | orchestrator | 2025-04-10 01:43:45 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:48.136722 | orchestrator | 2025-04-10 01:43:45 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:48.136868 | orchestrator | 2025-04-10 01:43:48 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:51.179980 | orchestrator | 2025-04-10 01:43:48 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:51.180132 | orchestrator | 2025-04-10 01:43:51 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:54.226438 | orchestrator | 2025-04-10 01:43:51 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:54.226590 | orchestrator | 2025-04-10 01:43:54 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:43:57.268089 | orchestrator | 2025-04-10 01:43:54 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:43:57.268229 | orchestrator | 2025-04-10 01:43:57 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:00.325504 | orchestrator | 2025-04-10 01:43:57 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:00.325654 | orchestrator | 2025-04-10 01:44:00 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:03.371402 | orchestrator | 2025-04-10 01:44:00 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:03.371553 | orchestrator | 2025-04-10 01:44:03 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:06.416656 | orchestrator | 2025-04-10 01:44:03 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:06.416805 | orchestrator | 2025-04-10 01:44:06 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:09.468561 | orchestrator | 2025-04-10 01:44:06 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:09.468707 | orchestrator | 2025-04-10 01:44:09 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:12.522841 | orchestrator | 2025-04-10 01:44:09 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:12.522965 | orchestrator | 2025-04-10 01:44:12 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:15.573226 | orchestrator | 2025-04-10 01:44:12 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:15.573383 | orchestrator | 2025-04-10 01:44:15 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:18.615816 | orchestrator | 2025-04-10 01:44:15 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:18.615974 | orchestrator | 2025-04-10 01:44:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:21.662379 | orchestrator | 2025-04-10 01:44:18 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:21.662521 | orchestrator | 2025-04-10 01:44:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:24.706396 | orchestrator | 2025-04-10 01:44:21 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:24.706545 | orchestrator | 2025-04-10 01:44:24 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:27.751573 | orchestrator | 2025-04-10 01:44:24 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:27.751717 | orchestrator | 2025-04-10 01:44:27 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:30.800303 | orchestrator | 2025-04-10 01:44:27 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:30.800447 | orchestrator | 2025-04-10 01:44:30 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:33.844756 | orchestrator | 2025-04-10 01:44:30 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:33.844888 | orchestrator | 2025-04-10 01:44:33 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:36.898720 | orchestrator | 2025-04-10 01:44:33 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:36.898899 | orchestrator | 2025-04-10 01:44:36 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:36.899690 | orchestrator | 2025-04-10 01:44:36 | INFO  | Task 682bf9db-20ab-4ff0-baef-f9be1d195659 is in state STARTED 2025-04-10 01:44:39.967291 | orchestrator | 2025-04-10 01:44:36 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:39.967439 | orchestrator | 2025-04-10 01:44:39 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:39.968194 | orchestrator | 2025-04-10 01:44:39 | INFO  | Task 682bf9db-20ab-4ff0-baef-f9be1d195659 is in state STARTED 2025-04-10 01:44:39.968597 | orchestrator | 2025-04-10 01:44:39 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:43.031246 | orchestrator | 2025-04-10 01:44:43 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:43.031938 | orchestrator | 2025-04-10 01:44:43 | INFO  | Task 682bf9db-20ab-4ff0-baef-f9be1d195659 is in state STARTED 2025-04-10 01:44:43.032128 | orchestrator | 
2025-04-10 01:44:43 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:46.095131 | orchestrator | 2025-04-10 01:44:46 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:46.096416 | orchestrator | 2025-04-10 01:44:46 | INFO  | Task 682bf9db-20ab-4ff0-baef-f9be1d195659 is in state STARTED 2025-04-10 01:44:49.135615 | orchestrator | 2025-04-10 01:44:46 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:49.135788 | orchestrator | 2025-04-10 01:44:49 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:49.137101 | orchestrator | 2025-04-10 01:44:49 | INFO  | Task 682bf9db-20ab-4ff0-baef-f9be1d195659 is in state SUCCESS 2025-04-10 01:44:52.182375 | orchestrator | 2025-04-10 01:44:49 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:52.182520 | orchestrator | 2025-04-10 01:44:52 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:55.226899 | orchestrator | 2025-04-10 01:44:52 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:55.227102 | orchestrator | 2025-04-10 01:44:55 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:44:58.284496 | orchestrator | 2025-04-10 01:44:55 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:44:58.284759 | orchestrator | 2025-04-10 01:44:58 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:01.331534 | orchestrator | 2025-04-10 01:44:58 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:01.331693 | orchestrator | 2025-04-10 01:45:01 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:04.385608 | orchestrator | 2025-04-10 01:45:01 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:04.385762 | orchestrator | 2025-04-10 01:45:04 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:07.427887 | 
orchestrator | 2025-04-10 01:45:04 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:07.428084 | orchestrator | 2025-04-10 01:45:07 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:10.479427 | orchestrator | 2025-04-10 01:45:07 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:10.479538 | orchestrator | 2025-04-10 01:45:10 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:13.518647 | orchestrator | 2025-04-10 01:45:10 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:13.518791 | orchestrator | 2025-04-10 01:45:13 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:16.561428 | orchestrator | 2025-04-10 01:45:13 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:16.561580 | orchestrator | 2025-04-10 01:45:16 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:19.608186 | orchestrator | 2025-04-10 01:45:16 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:19.608348 | orchestrator | 2025-04-10 01:45:19 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:22.664119 | orchestrator | 2025-04-10 01:45:19 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:22.664257 | orchestrator | 2025-04-10 01:45:22 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:25.717527 | orchestrator | 2025-04-10 01:45:22 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:25.717677 | orchestrator | 2025-04-10 01:45:25 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:28.764854 | orchestrator | 2025-04-10 01:45:25 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:28.765049 | orchestrator | 2025-04-10 01:45:28 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:31.805511 | orchestrator | 
2025-04-10 01:45:28 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:31.805633 | orchestrator | 2025-04-10 01:45:31 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:34.858106 | orchestrator | 2025-04-10 01:45:31 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:34.858259 | orchestrator | 2025-04-10 01:45:34 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:37.905755 | orchestrator | 2025-04-10 01:45:34 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:37.905897 | orchestrator | 2025-04-10 01:45:37 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:40.946404 | orchestrator | 2025-04-10 01:45:37 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:40.946550 | orchestrator | 2025-04-10 01:45:40 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:43.990928 | orchestrator | 2025-04-10 01:45:40 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:43.991123 | orchestrator | 2025-04-10 01:45:43 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:47.048531 | orchestrator | 2025-04-10 01:45:43 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:47.048722 | orchestrator | 2025-04-10 01:45:47 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:50.097052 | orchestrator | 2025-04-10 01:45:47 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:50.097192 | orchestrator | 2025-04-10 01:45:50 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:53.145473 | orchestrator | 2025-04-10 01:45:50 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:53.145573 | orchestrator | 2025-04-10 01:45:53 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:56.195396 | orchestrator | 2025-04-10 
01:45:53 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:56.195514 | orchestrator | 2025-04-10 01:45:56 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:45:59.251424 | orchestrator | 2025-04-10 01:45:56 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:45:59.251559 | orchestrator | 2025-04-10 01:45:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:02.298407 | orchestrator | 2025-04-10 01:45:59 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:02.298517 | orchestrator | 2025-04-10 01:46:02 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:05.336540 | orchestrator | 2025-04-10 01:46:02 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:05.336708 | orchestrator | 2025-04-10 01:46:05 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:08.384776 | orchestrator | 2025-04-10 01:46:05 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:08.385002 | orchestrator | 2025-04-10 01:46:08 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:11.431445 | orchestrator | 2025-04-10 01:46:08 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:11.431583 | orchestrator | 2025-04-10 01:46:11 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:14.476327 | orchestrator | 2025-04-10 01:46:11 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:14.476495 | orchestrator | 2025-04-10 01:46:14 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:17.523112 | orchestrator | 2025-04-10 01:46:14 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:17.523276 | orchestrator | 2025-04-10 01:46:17 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:20.570528 | orchestrator | 2025-04-10 01:46:17 | INFO 
 | Wait 1 second(s) until the next check 2025-04-10 01:46:20.570670 | orchestrator | 2025-04-10 01:46:20 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:23.621333 | orchestrator | 2025-04-10 01:46:20 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:23.621474 | orchestrator | 2025-04-10 01:46:23 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:26.663851 | orchestrator | 2025-04-10 01:46:23 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:26.664056 | orchestrator | 2025-04-10 01:46:26 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:29.703524 | orchestrator | 2025-04-10 01:46:26 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:29.703671 | orchestrator | 2025-04-10 01:46:29 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:32.749810 | orchestrator | 2025-04-10 01:46:29 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:32.750084 | orchestrator | 2025-04-10 01:46:32 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:35.791576 | orchestrator | 2025-04-10 01:46:32 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:35.791755 | orchestrator | 2025-04-10 01:46:35 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:38.843534 | orchestrator | 2025-04-10 01:46:35 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:38.843680 | orchestrator | 2025-04-10 01:46:38 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:41.888612 | orchestrator | 2025-04-10 01:46:38 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:41.888760 | orchestrator | 2025-04-10 01:46:41 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:44.934319 | orchestrator | 2025-04-10 01:46:41 | INFO  | Wait 1 
second(s) until the next check 2025-04-10 01:46:44.934447 | orchestrator | 2025-04-10 01:46:44 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:47.980767 | orchestrator | 2025-04-10 01:46:44 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:47.980966 | orchestrator | 2025-04-10 01:46:47 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:51.031276 | orchestrator | 2025-04-10 01:46:47 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:51.031413 | orchestrator | 2025-04-10 01:46:51 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:54.077941 | orchestrator | 2025-04-10 01:46:51 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:54.078172 | orchestrator | 2025-04-10 01:46:54 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:46:57.129774 | orchestrator | 2025-04-10 01:46:54 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:46:57.129974 | orchestrator | 2025-04-10 01:46:57 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:00.181817 | orchestrator | 2025-04-10 01:46:57 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:00.182090 | orchestrator | 2025-04-10 01:47:00 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:03.232220 | orchestrator | 2025-04-10 01:47:00 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:03.232372 | orchestrator | 2025-04-10 01:47:03 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:06.280487 | orchestrator | 2025-04-10 01:47:03 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:06.280634 | orchestrator | 2025-04-10 01:47:06 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:09.326514 | orchestrator | 2025-04-10 01:47:06 | INFO  | Wait 1 second(s) until 
the next check 2025-04-10 01:47:09.326658 | orchestrator | 2025-04-10 01:47:09 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:12.374442 | orchestrator | 2025-04-10 01:47:09 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:12.374619 | orchestrator | 2025-04-10 01:47:12 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:15.426081 | orchestrator | 2025-04-10 01:47:12 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:15.426219 | orchestrator | 2025-04-10 01:47:15 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:18.471786 | orchestrator | 2025-04-10 01:47:15 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:18.471995 | orchestrator | 2025-04-10 01:47:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:21.519595 | orchestrator | 2025-04-10 01:47:18 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:21.519734 | orchestrator | 2025-04-10 01:47:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:24.566906 | orchestrator | 2025-04-10 01:47:21 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:24.567077 | orchestrator | 2025-04-10 01:47:24 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:27.613117 | orchestrator | 2025-04-10 01:47:24 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:27.613238 | orchestrator | 2025-04-10 01:47:27 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:30.661553 | orchestrator | 2025-04-10 01:47:27 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:30.661699 | orchestrator | 2025-04-10 01:47:30 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:33.709267 | orchestrator | 2025-04-10 01:47:30 | INFO  | Wait 1 second(s) until the next check 
2025-04-10 01:47:33.709374 | orchestrator | 2025-04-10 01:47:33 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:36.762481 | orchestrator | 2025-04-10 01:47:33 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:36.762638 | orchestrator | 2025-04-10 01:47:36 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:39.809316 | orchestrator | 2025-04-10 01:47:36 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:39.809463 | orchestrator | 2025-04-10 01:47:39 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:42.849942 | orchestrator | 2025-04-10 01:47:39 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:42.850137 | orchestrator | 2025-04-10 01:47:42 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:45.886558 | orchestrator | 2025-04-10 01:47:42 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:45.886692 | orchestrator | 2025-04-10 01:47:45 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:48.932811 | orchestrator | 2025-04-10 01:47:45 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:48.933010 | orchestrator | 2025-04-10 01:47:48 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:51.989436 | orchestrator | 2025-04-10 01:47:48 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:51.989601 | orchestrator | 2025-04-10 01:47:51 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:55.033732 | orchestrator | 2025-04-10 01:47:51 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:47:55.033900 | orchestrator | 2025-04-10 01:47:55 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:47:58.090276 | orchestrator | 2025-04-10 01:47:55 | INFO  | Wait 1 second(s) until the next check 2025-04-10 
01:47:58.090418 | orchestrator | 2025-04-10 01:47:58 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:48:01.146012 | orchestrator | 2025-04-10 01:47:58 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:48:01.146207 | orchestrator | 2025-04-10 01:48:01 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:48:04.192637 | orchestrator | 2025-04-10 01:48:01 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:48:04.192782 | orchestrator | 2025-04-10 01:48:04 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:48:07.240731 | orchestrator | 2025-04-10 01:48:04 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:48:07.240884 | orchestrator | 2025-04-10 01:48:07 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:48:10.295054 | orchestrator | 2025-04-10 01:48:07 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:48:10.295223 | orchestrator | 2025-04-10 01:48:10 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:48:10.295732 | orchestrator | 2025-04-10 01:48:10 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:48:13.342490 | orchestrator | 2025-04-10 01:48:13 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:48:16.388225 | orchestrator | 2025-04-10 01:48:13 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:48:16.388365 | orchestrator | 2025-04-10 01:48:16 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:48:19.440651 | orchestrator | 2025-04-10 01:48:16 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:48:19.440808 | orchestrator | 2025-04-10 01:48:19 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:48:22.481314 | orchestrator | 2025-04-10 01:48:19 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:48:22.481454 
| orchestrator | 2025-04-10 01:48:22 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
2025-04-10 01:48:25.532282 | orchestrator | 2025-04-10 01:48:22 | INFO  | Wait 1 second(s) until the next check
[... identical "Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED" / "Wait 1 second(s) until the next check" record pairs repeated roughly every 3 seconds from 01:48:25 to 01:54:34 ...]
2025-04-10 01:54:37.546798 | orchestrator | 2025-04-10 01:54:37 | INFO  | Task eb7b2f09-3cfc-4e7d-857b-9a427c972782 is in state STARTED
2025-04-10 01:54:37.547873 | orchestrator | 2025-04-10 01:54:37 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
[... both tasks reported in state STARTED roughly every 3 seconds from 01:54:37 to 01:54:46 ...]
2025-04-10 01:54:49.789249 | orchestrator | 2025-04-10 01:54:49 | INFO  | Task eb7b2f09-3cfc-4e7d-857b-9a427c972782 is in state SUCCESS
2025-04-10 01:54:49.790836 | orchestrator | 2025-04-10 01:54:49 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED
[... identical "Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED" / "Wait 1 second(s) until the next check" record pairs repeated roughly every 3 seconds from 01:54:52 to 01:56:21 ...]
2025-04-10 01:56:24.274072 | orchestrator | 2025-04-10 01:56:21 | INFO
 | Wait 1 second(s) until the next check 2025-04-10 01:56:24.274217 | orchestrator | 2025-04-10 01:56:24 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:56:27.320056 | orchestrator | 2025-04-10 01:56:24 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:56:27.320193 | orchestrator | 2025-04-10 01:56:27 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:56:30.361121 | orchestrator | 2025-04-10 01:56:27 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:56:30.361243 | orchestrator | 2025-04-10 01:56:30 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:56:33.417575 | orchestrator | 2025-04-10 01:56:30 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:56:33.417824 | orchestrator | 2025-04-10 01:56:33 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:56:36.461270 | orchestrator | 2025-04-10 01:56:33 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:56:36.461415 | orchestrator | 2025-04-10 01:56:36 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:56:39.514239 | orchestrator | 2025-04-10 01:56:36 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:56:39.514338 | orchestrator | 2025-04-10 01:56:39 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:56:42.560212 | orchestrator | 2025-04-10 01:56:39 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:56:42.560402 | orchestrator | 2025-04-10 01:56:42 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:56:45.600431 | orchestrator | 2025-04-10 01:56:42 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:56:45.600577 | orchestrator | 2025-04-10 01:56:45 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:56:48.644471 | orchestrator | 2025-04-10 01:56:45 | INFO  | Wait 1 
second(s) until the next check 2025-04-10 01:56:48.645593 | orchestrator | 2025-04-10 01:56:48 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:56:51.694546 | orchestrator | 2025-04-10 01:56:48 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:56:51.694744 | orchestrator | 2025-04-10 01:56:51 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:56:54.746899 | orchestrator | 2025-04-10 01:56:51 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:56:54.747074 | orchestrator | 2025-04-10 01:56:54 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:56:57.796240 | orchestrator | 2025-04-10 01:56:54 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:56:57.796387 | orchestrator | 2025-04-10 01:56:57 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:00.843129 | orchestrator | 2025-04-10 01:56:57 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:00.843268 | orchestrator | 2025-04-10 01:57:00 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:03.892898 | orchestrator | 2025-04-10 01:57:00 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:03.893011 | orchestrator | 2025-04-10 01:57:03 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:06.941218 | orchestrator | 2025-04-10 01:57:03 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:06.941393 | orchestrator | 2025-04-10 01:57:06 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:09.981995 | orchestrator | 2025-04-10 01:57:06 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:09.982201 | orchestrator | 2025-04-10 01:57:09 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:13.030889 | orchestrator | 2025-04-10 01:57:09 | INFO  | Wait 1 second(s) until 
the next check 2025-04-10 01:57:13.031067 | orchestrator | 2025-04-10 01:57:13 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:16.079579 | orchestrator | 2025-04-10 01:57:13 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:16.079722 | orchestrator | 2025-04-10 01:57:16 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:19.130077 | orchestrator | 2025-04-10 01:57:16 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:19.130202 | orchestrator | 2025-04-10 01:57:19 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:22.186253 | orchestrator | 2025-04-10 01:57:19 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:22.186396 | orchestrator | 2025-04-10 01:57:22 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:25.231538 | orchestrator | 2025-04-10 01:57:22 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:25.231740 | orchestrator | 2025-04-10 01:57:25 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:28.272885 | orchestrator | 2025-04-10 01:57:25 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:28.273009 | orchestrator | 2025-04-10 01:57:28 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:31.327511 | orchestrator | 2025-04-10 01:57:28 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:31.327712 | orchestrator | 2025-04-10 01:57:31 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:34.379969 | orchestrator | 2025-04-10 01:57:31 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:34.380115 | orchestrator | 2025-04-10 01:57:34 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:37.425059 | orchestrator | 2025-04-10 01:57:34 | INFO  | Wait 1 second(s) until the next check 
2025-04-10 01:57:37.425234 | orchestrator | 2025-04-10 01:57:37 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:40.469978 | orchestrator | 2025-04-10 01:57:37 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:40.470228 | orchestrator | 2025-04-10 01:57:40 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:43.518099 | orchestrator | 2025-04-10 01:57:40 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:43.518251 | orchestrator | 2025-04-10 01:57:43 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:46.562384 | orchestrator | 2025-04-10 01:57:43 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:46.562530 | orchestrator | 2025-04-10 01:57:46 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:49.601875 | orchestrator | 2025-04-10 01:57:46 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:49.602822 | orchestrator | 2025-04-10 01:57:49 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:52.647985 | orchestrator | 2025-04-10 01:57:49 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:52.648091 | orchestrator | 2025-04-10 01:57:52 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:55.694963 | orchestrator | 2025-04-10 01:57:52 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:55.695105 | orchestrator | 2025-04-10 01:57:55 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:57:58.739768 | orchestrator | 2025-04-10 01:57:55 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:57:58.739888 | orchestrator | 2025-04-10 01:57:58 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:01.780655 | orchestrator | 2025-04-10 01:57:58 | INFO  | Wait 1 second(s) until the next check 2025-04-10 
01:58:01.780797 | orchestrator | 2025-04-10 01:58:01 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:04.831978 | orchestrator | 2025-04-10 01:58:01 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:04.832123 | orchestrator | 2025-04-10 01:58:04 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:07.878767 | orchestrator | 2025-04-10 01:58:04 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:07.878908 | orchestrator | 2025-04-10 01:58:07 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:10.927407 | orchestrator | 2025-04-10 01:58:07 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:10.927582 | orchestrator | 2025-04-10 01:58:10 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:13.973195 | orchestrator | 2025-04-10 01:58:10 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:13.973338 | orchestrator | 2025-04-10 01:58:13 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:17.023871 | orchestrator | 2025-04-10 01:58:13 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:17.024018 | orchestrator | 2025-04-10 01:58:17 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:20.079015 | orchestrator | 2025-04-10 01:58:17 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:20.079185 | orchestrator | 2025-04-10 01:58:20 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:23.124579 | orchestrator | 2025-04-10 01:58:20 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:23.124766 | orchestrator | 2025-04-10 01:58:23 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:26.176346 | orchestrator | 2025-04-10 01:58:23 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:26.176514 
| orchestrator | 2025-04-10 01:58:26 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:29.235001 | orchestrator | 2025-04-10 01:58:26 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:29.235136 | orchestrator | 2025-04-10 01:58:29 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:32.302194 | orchestrator | 2025-04-10 01:58:29 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:32.302337 | orchestrator | 2025-04-10 01:58:32 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:35.357386 | orchestrator | 2025-04-10 01:58:32 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:35.357546 | orchestrator | 2025-04-10 01:58:35 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:38.408946 | orchestrator | 2025-04-10 01:58:35 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:38.409090 | orchestrator | 2025-04-10 01:58:38 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:41.466715 | orchestrator | 2025-04-10 01:58:38 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:41.466861 | orchestrator | 2025-04-10 01:58:41 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:44.521260 | orchestrator | 2025-04-10 01:58:41 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:44.521438 | orchestrator | 2025-04-10 01:58:44 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:47.567378 | orchestrator | 2025-04-10 01:58:44 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:47.567528 | orchestrator | 2025-04-10 01:58:47 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:50.623430 | orchestrator | 2025-04-10 01:58:47 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:50.623568 | orchestrator 
| 2025-04-10 01:58:50 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:53.669005 | orchestrator | 2025-04-10 01:58:50 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:53.669165 | orchestrator | 2025-04-10 01:58:53 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:56.722764 | orchestrator | 2025-04-10 01:58:53 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:56.722910 | orchestrator | 2025-04-10 01:58:56 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:58:59.774311 | orchestrator | 2025-04-10 01:58:56 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:58:59.774457 | orchestrator | 2025-04-10 01:58:59 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:02.821667 | orchestrator | 2025-04-10 01:58:59 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:02.821826 | orchestrator | 2025-04-10 01:59:02 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:05.870133 | orchestrator | 2025-04-10 01:59:02 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:05.870284 | orchestrator | 2025-04-10 01:59:05 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:08.924153 | orchestrator | 2025-04-10 01:59:05 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:08.924294 | orchestrator | 2025-04-10 01:59:08 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:11.977368 | orchestrator | 2025-04-10 01:59:08 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:11.977542 | orchestrator | 2025-04-10 01:59:11 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:15.026288 | orchestrator | 2025-04-10 01:59:11 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:15.026436 | orchestrator | 2025-04-10 
01:59:15 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:18.078326 | orchestrator | 2025-04-10 01:59:15 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:18.078486 | orchestrator | 2025-04-10 01:59:18 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:21.129535 | orchestrator | 2025-04-10 01:59:18 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:21.129743 | orchestrator | 2025-04-10 01:59:21 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:24.178073 | orchestrator | 2025-04-10 01:59:21 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:24.178219 | orchestrator | 2025-04-10 01:59:24 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:27.237241 | orchestrator | 2025-04-10 01:59:24 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:27.237379 | orchestrator | 2025-04-10 01:59:27 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:30.292900 | orchestrator | 2025-04-10 01:59:27 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:30.293067 | orchestrator | 2025-04-10 01:59:30 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:33.347309 | orchestrator | 2025-04-10 01:59:30 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:33.347449 | orchestrator | 2025-04-10 01:59:33 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:36.385142 | orchestrator | 2025-04-10 01:59:33 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:36.385298 | orchestrator | 2025-04-10 01:59:36 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:39.425360 | orchestrator | 2025-04-10 01:59:36 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:39.425498 | orchestrator | 2025-04-10 01:59:39 | INFO 
 | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:42.476489 | orchestrator | 2025-04-10 01:59:39 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:42.476681 | orchestrator | 2025-04-10 01:59:42 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:45.525150 | orchestrator | 2025-04-10 01:59:42 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:45.525301 | orchestrator | 2025-04-10 01:59:45 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:48.567828 | orchestrator | 2025-04-10 01:59:45 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:48.567952 | orchestrator | 2025-04-10 01:59:48 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:51.615239 | orchestrator | 2025-04-10 01:59:48 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:51.615387 | orchestrator | 2025-04-10 01:59:51 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:54.669276 | orchestrator | 2025-04-10 01:59:51 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:54.669426 | orchestrator | 2025-04-10 01:59:54 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 01:59:57.712921 | orchestrator | 2025-04-10 01:59:54 | INFO  | Wait 1 second(s) until the next check 2025-04-10 01:59:57.713105 | orchestrator | 2025-04-10 01:59:57 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 02:00:00.760954 | orchestrator | 2025-04-10 01:59:57 | INFO  | Wait 1 second(s) until the next check 2025-04-10 02:00:00.761102 | orchestrator | 2025-04-10 02:00:00 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 02:00:03.828026 | orchestrator | 2025-04-10 02:00:00 | INFO  | Wait 1 second(s) until the next check 2025-04-10 02:00:03.828147 | orchestrator | 2025-04-10 02:00:03 | INFO  | Task 
6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 02:00:06.880892 | orchestrator | 2025-04-10 02:00:03 | INFO  | Wait 1 second(s) until the next check 2025-04-10 02:00:06.881042 | orchestrator | 2025-04-10 02:00:06 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 02:00:09.928040 | orchestrator | 2025-04-10 02:00:06 | INFO  | Wait 1 second(s) until the next check 2025-04-10 02:00:09.928190 | orchestrator | 2025-04-10 02:00:09 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 02:00:12.981038 | orchestrator | 2025-04-10 02:00:09 | INFO  | Wait 1 second(s) until the next check 2025-04-10 02:00:12.981182 | orchestrator | 2025-04-10 02:00:12 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 02:00:16.041981 | orchestrator | 2025-04-10 02:00:12 | INFO  | Wait 1 second(s) until the next check 2025-04-10 02:00:16.042233 | orchestrator | 2025-04-10 02:00:16 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 02:00:19.096070 | orchestrator | 2025-04-10 02:00:16 | INFO  | Wait 1 second(s) until the next check 2025-04-10 02:00:19.096209 | orchestrator | 2025-04-10 02:00:19 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 02:00:22.143657 | orchestrator | 2025-04-10 02:00:19 | INFO  | Wait 1 second(s) until the next check 2025-04-10 02:00:22.143806 | orchestrator | 2025-04-10 02:00:22 | INFO  | Task 6ceb9c04-fa5a-4943-be37-e776008b03dc is in state STARTED 2025-04-10 02:00:22.385954 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-04-10 02:00:22.394401 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-04-10 02:00:23.078171 | 2025-04-10 02:00:23.078977 | PLAY [Post output play] 2025-04-10 02:00:23.108191 | 2025-04-10 02:00:23.108316 | LOOP [stage-output : Register sources] 2025-04-10 02:00:23.183932 | 2025-04-10 
2025-04-10 02:00:23.184171 | TASK [stage-output : Check sudo]
2025-04-10 02:00:23.945827 | orchestrator | sudo: a password is required
2025-04-10 02:00:24.239098 | orchestrator | ok: Runtime: 0:00:00.017189
2025-04-10 02:00:24.256416 | LOOP [stage-output : Set source and destination for files and folders]
2025-04-10 02:00:24.296782 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-04-10 02:00:24.389047 | orchestrator | ok
2025-04-10 02:00:24.399880 | LOOP [stage-output : Ensure target folders exist]
2025-04-10 02:00:24.851691 | orchestrator | ok: "docs"
2025-04-10 02:00:25.109628 | orchestrator | ok: "artifacts"
2025-04-10 02:00:25.358740 | orchestrator | ok: "logs"
2025-04-10 02:00:25.384867 | LOOP [stage-output : Copy files and folders to staging folder]
2025-04-10 02:00:25.428083 | TASK [stage-output : Make all log files readable]
2025-04-10 02:00:25.724643 | orchestrator | ok
2025-04-10 02:00:25.735453 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-04-10 02:00:25.781398 | orchestrator | skipping: Conditional result was False
2025-04-10 02:00:25.794577 | TASK [stage-output : Discover log files for compression]
2025-04-10 02:00:25.819663 | orchestrator | skipping: Conditional result was False
2025-04-10 02:00:25.835316 | LOOP [stage-output : Archive everything from logs]
2025-04-10 02:00:25.902372 | PLAY [Post cleanup play]
2025-04-10 02:00:25.926275 | TASK [Set cloud fact (Zuul deployment)]
2025-04-10 02:00:25.992842 | orchestrator | ok
2025-04-10 02:00:26.003652 | TASK [Set cloud fact (local deployment)]
2025-04-10 02:00:26.038956 | orchestrator | skipping: Conditional result was False
2025-04-10 02:00:26.057657 | TASK [Clean the cloud environment]
2025-04-10 02:00:26.743495 | orchestrator | 2025-04-10 02:00:26 - clean up servers
2025-04-10 02:00:27.690160 | orchestrator | 2025-04-10 02:00:27 - testbed-manager
2025-04-10 02:00:27.785509 | orchestrator | 2025-04-10 02:00:27 - testbed-node-0
2025-04-10 02:00:27.880667 | orchestrator | 2025-04-10 02:00:27 - testbed-node-2
2025-04-10 02:00:27.975348 | orchestrator | 2025-04-10 02:00:27 - testbed-node-4
2025-04-10 02:00:28.084715 | orchestrator | 2025-04-10 02:00:28 - testbed-node-5
2025-04-10 02:00:28.186410 | orchestrator | 2025-04-10 02:00:28 - testbed-node-3
2025-04-10 02:00:28.283939 | orchestrator | 2025-04-10 02:00:28 - testbed-node-1
2025-04-10 02:00:28.375284 | orchestrator | 2025-04-10 02:00:28 - clean up keypairs
2025-04-10 02:00:28.396158 | orchestrator | 2025-04-10 02:00:28 - testbed
2025-04-10 02:00:28.429792 | orchestrator | 2025-04-10 02:00:28 - wait for servers to be gone
2025-04-10 02:00:37.655025 | orchestrator | 2025-04-10 02:00:37 - clean up ports
2025-04-10 02:00:37.870645 | orchestrator | 2025-04-10 02:00:37 - 046cd395-8908-4cb1-b76c-c581d3ba991a
2025-04-10 02:00:38.048371 | orchestrator | 2025-04-10 02:00:38 - 086380e6-4501-4281-83cb-6fafb5322da3
2025-04-10 02:00:38.566739 | orchestrator | 2025-04-10 02:00:38 - 37e6858d-0b12-45b6-a737-d195ed607fff
2025-04-10 02:00:38.876705 | orchestrator | 2025-04-10 02:00:38 - 5d7d56d5-f3c6-4ebf-8d8f-b6f588bffa0f
2025-04-10 02:00:39.101179 | orchestrator | 2025-04-10 02:00:39 - a27f62e8-ea26-4216-a857-5cb79f06e033
2025-04-10 02:00:39.311326 | orchestrator | 2025-04-10 02:00:39 - cf5aa8a8-8732-4e78-89e4-b04b572c5f93
2025-04-10 02:00:39.567709 | orchestrator | 2025-04-10 02:00:39 - f1804656-7ed6-4b10-a60c-40dcc8d195d2
2025-04-10 02:00:39.767717 | orchestrator | 2025-04-10 02:00:39 - clean up volumes
2025-04-10 02:00:39.907964 | orchestrator | 2025-04-10 02:00:39 - testbed-volume-4-node-base
2025-04-10 02:00:39.952731 | orchestrator | 2025-04-10 02:00:39 - testbed-volume-0-node-base
2025-04-10 02:00:40.000843 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-manager-base
2025-04-10 02:00:40.047631 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-2-node-base
2025-04-10 02:00:40.091765 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-1-node-base
2025-04-10 02:00:40.138086 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-5-node-base
2025-04-10 02:00:40.182069 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-3-node-base
2025-04-10 02:00:40.227204 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-10-node-4
2025-04-10 02:00:40.270293 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-8-node-2
2025-04-10 02:00:40.316954 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-17-node-5
2025-04-10 02:00:40.360174 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-1-node-1
2025-04-10 02:00:40.403866 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-6-node-0
2025-04-10 02:00:40.448492 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-13-node-1
2025-04-10 02:00:40.490237 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-15-node-3
2025-04-10 02:00:40.541106 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-12-node-0
2025-04-10 02:00:40.580562 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-7-node-1
2025-04-10 02:00:40.626530 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-16-node-4
2025-04-10 02:00:40.670757 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-3-node-3
2025-04-10 02:00:40.712202 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-0-node-0
2025-04-10 02:00:40.755683 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-2-node-2
2025-04-10 02:00:40.800035 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-4-node-4
2025-04-10 02:00:40.843287 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-11-node-5
2025-04-10 02:00:40.890673 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-14-node-2
2025-04-10 02:00:40.939499 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-9-node-3
2025-04-10 02:00:40.980309 | orchestrator | 2025-04-10 02:00:40 - testbed-volume-5-node-5
2025-04-10 02:00:41.023150 | orchestrator | 2025-04-10 02:00:41 - disconnect routers
2025-04-10 02:00:41.123040 | orchestrator | 2025-04-10 02:00:41 - testbed
2025-04-10 02:00:41.990255 | orchestrator | 2025-04-10 02:00:41 - clean up subnets
2025-04-10 02:00:42.026305 | orchestrator | 2025-04-10 02:00:42 - subnet-testbed-management
2025-04-10 02:00:42.154341 | orchestrator | 2025-04-10 02:00:42 - clean up networks
2025-04-10 02:00:42.366305 | orchestrator | 2025-04-10 02:00:42 - net-testbed-management
2025-04-10 02:00:42.630310 | orchestrator | 2025-04-10 02:00:42 - clean up security groups
2025-04-10 02:00:42.662378 | orchestrator | 2025-04-10 02:00:42 - testbed-node
2025-04-10 02:00:42.754367 | orchestrator | 2025-04-10 02:00:42 - testbed-management
2025-04-10 02:00:42.842226 | orchestrator | 2025-04-10 02:00:42 - clean up floating ips
2025-04-10 02:00:42.874327 | orchestrator | 2025-04-10 02:00:42 - 81.163.192.103
2025-04-10 02:00:43.262311 | orchestrator | 2025-04-10 02:00:43 - clean up routers
2025-04-10 02:00:43.314302 | orchestrator | 2025-04-10 02:00:43 - testbed
2025-04-10 02:00:45.118379 | orchestrator | changed
2025-04-10 02:00:45.161316 | PLAY RECAP
2025-04-10 02:00:45.161370 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-04-10 02:00:45.274670 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-04-10 02:00:45.277921 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-04-10 02:00:45.954913 | PLAY [Base post-fetch]
2025-04-10 02:00:45.984257 | TASK [fetch-output : Set log path for multiple nodes]
2025-04-10 02:00:46.050858 | orchestrator | skipping: Conditional result was False
2025-04-10 02:00:46.064159 | TASK [fetch-output : Set log path for single node]
2025-04-10 02:00:46.113688 | orchestrator | ok
2025-04-10 02:00:46.122816 | LOOP [fetch-output : Ensure local output dirs]
2025-04-10 02:00:46.589120 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/d16e7eccd07141d892bb7c877e6612d7/work/logs"
2025-04-10 02:00:46.853300 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/d16e7eccd07141d892bb7c877e6612d7/work/artifacts"
2025-04-10 02:00:47.122813 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/d16e7eccd07141d892bb7c877e6612d7/work/docs"
2025-04-10 02:00:47.143906 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-04-10 02:00:47.948072 | orchestrator | changed: .d..t...... ./
2025-04-10 02:00:47.948605 | orchestrator | changed: All items complete
2025-04-10 02:00:48.545288 | orchestrator | changed: .d..t...... ./
2025-04-10 02:00:49.127059 | orchestrator | changed: .d..t...... ./
2025-04-10 02:00:49.159287 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-04-10 02:00:49.204129 | orchestrator | skipping: Conditional result was False
2025-04-10 02:00:49.211011 | orchestrator | skipping: Conditional result was False
2025-04-10 02:00:49.263103 | PLAY RECAP
2025-04-10 02:00:49.263157 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-04-10 02:00:49.370628 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-04-10 02:00:49.373814 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-04-10 02:00:50.064084 | PLAY [Base post]
2025-04-10 02:00:50.092562 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-04-10 02:00:51.288426 | orchestrator | changed
2025-04-10 02:00:51.329796 | PLAY RECAP
2025-04-10 02:00:51.329862 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-04-10 02:00:51.439015 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-04-10 02:00:51.447541 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-04-10 02:00:52.165689 | PLAY [Base post-logs]
2025-04-10 02:00:52.181287 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-04-10 02:00:52.631234 | localhost | changed
2025-04-10 02:00:52.638285 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-04-10 02:00:52.669348 | localhost | ok
2025-04-10 02:00:52.679492 | TASK [Set zuul-log-path fact]
2025-04-10 02:00:52.710152 | localhost | ok
2025-04-10 02:00:52.724435 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-04-10 02:00:52.764319 | localhost | ok
2025-04-10 02:00:52.773924 | TASK [upload-logs : Create log directories]
2025-04-10 02:00:53.287220 | localhost | changed
2025-04-10 02:00:53.293835 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-04-10 02:00:53.789528 | localhost -> localhost | ok: Runtime: 0:00:00.006572
2025-04-10 02:00:53.800952 | TASK [upload-logs : Upload logs to log server]
2025-04-10 02:00:54.349113 | localhost | Output suppressed because no_log was given
2025-04-10 02:00:54.354093 | LOOP [upload-logs : Compress console log and json output]
2025-04-10 02:00:54.418049 | localhost | skipping: Conditional result was False
2025-04-10 02:00:54.434676 | localhost | skipping: Conditional result was False
2025-04-10 02:00:54.446372 | LOOP [upload-logs : Upload compressed console log and json output]
2025-04-10 02:00:54.518002 | localhost | skipping: Conditional result was False
2025-04-10 02:00:54.533305 | localhost | skipping: Conditional result was False
2025-04-10 02:00:54.549239 | LOOP [upload-logs : Upload console log and json output]